
Logic Rondo

An Introduction to Logic for Humans and Computers

Henning Basold

Base revision 33eba78, (HEAD -> v2) from 2024-02-20
Copyright © 2020–2024 Henning Basold, under a Creative Commons Attribution-ShareAlike 4.0 International License:
https://creativecommons.org/licenses/by-sa/4.0/.

Typeset with XeLaTeX

cb “Robot” on ⁇ by William Hollowell, licensed under Creative Commons BY, URL: https://thenounproject.com/term/r2d2/10452/
cb “obstacle” on ⁇ by Annette Spithoven, licensed under Creative Commons BY, URL: https://thenounproject.com/term/obstacle/211167
cb “Heart” on ⁇ by Bohdan Burmich, licensed under Creative Commons BY, URL: https://thenounproject.com/term/heart/396287
Contents
Prologue 1

I. Syntax and Proofs 5


1. Introduction to Propositional Logic 7
1.1. Syntax of Propositional Logic . . . . . . . . . . . . . . . . . . 10
1.2. Parse Trees and Subformulas . . . . . . . . . . . . . . . . . . . 13

2. Deduction in Propositional Logic 19


2.1. Deductive Systems . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2. Natural Deduction . . . . . . . . . . . . . . . . . . . . . . . . . 23

3. Fitch-Style Deduction and Classical Truth 35


3.1. Fitch-Style Natural Deduction . . . . . . . . . . . . . . . . . . 36
3.2. Truth Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3. Satisfiability and Validity . . . . . . . . . . . . . . . . . . . . . 46
3.4. Soundness and Consistency . . . . . . . . . . . . . . . . . . . 46
3.5. Classical Logic and Completeness . . . . . . . . . . . . . . . . 47

II. Semantics and Limits 53


A. Greek Letters 59

B. Tools 61
B.1. Sets and Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
B.2. Induction on Natural Numbers . . . . . . . . . . . . . . . . . . 61
B.3. Trees and Induction . . . . . . . . . . . . . . . . . . . . . . . . 64
B.4. Formal Languages . . . . . . . . . . . . . . . . . . . . . . . . . 67

C. Three-Valued Logic 71

D. Logic Programming 73

Index of Terminology and Notation 75


Prologue
Tok-tok-tok, Tok-tok-tok. The steady sound of the pen hitting the table drills
into Clara’s head. “Too slow!”, exclaims Professor Czerny. Clara starts sweat-
ing. “You will fail to finish the exam on time at that pace. You have to practice
more!”. “But professor, I have been practising to no end”, Clara responds in
despair. “Every time you give me an exercise, it is more difficult and you ex-
pect me to be even faster.” Czerny raises his eyebrows. “How do you think
you will get better at logic, if not through exercises?” Clara wipes the sweat
from her forehead with the sleeve of her jumper. “Not through mind-numbing
exercises! How can I understand anything about logic if we never get past
repetitive exercises?” Czerny bursts into laughter: “Do you believe that you know
better than I do what there is to learn? Do you believe that you can acquire, let
alone appreciate, any complex subject in logic if you haven’t even learned how
to do the simplest tasks in your sleep? I dare you to challenge me! For as long
as you are under my thumb, I determine what and how you learn!” Only now
Clara notices the sterile white room, where she is sitting at a single desk on a
hard wooden chair. Suddenly, the room starts stretching in front and behind
her. Logical symbols appear on the sides of the room: next to her are symbols she
has seen already, but further away new ones appear and vanish into the distance
with the stretching room. “You see”, Czerny exclaims, “I can make you
do these exercises for as long as I wish and you will not go anywhere, if you
don’t obey my rule.” Clara feels the room now expanding in all directions and
sees that the floor has no bottom, no end, any longer. Her chest contracts and a
scream bursts out.

The Visitor
Clara opens her eyes. She finds herself in a small room, with some of the
dawning day coming in. Clara gasps for air and sits up, realising that she is
in her own bed. Cold sweat is running down to her chin and her pillow has a
wet silhouette of her head. “Phew, what a dream! I should not prepare so late
for my exams any more.”

Suddenly, Clara realises that she is not alone, that there is something next to
her bed. She slowly turns her head and suppresses a scream upon laying her
eyes on the metallic object. It looks like a can with cylindrical arms and legs
sticking out, and a metal sphere for a head with something that looks like
closed eyes and a mouth. At the end of the hanging arms are metallic hands
with fingers and the legs even have feet, making the object look like a can-
shaped, clumsy human. Now awake, Clara realises that the object next to her
bed is a robot. She is hesitant to move, afraid to bring the robot to life. Who
knows if it is dangerous!

A red lamp on the chest of the robot starts flashing in regular pulses in rhythm
with buzzing noises coming out of its interior. Clara jumps out of her bed and
hides behind her desk at the other end of the room. The robot comes to life,
his arms stretch under tension. Its empty eyes are opening, and Clara has
the impression to stare down into ever-expanding space. Crackling noises
are followed by a metallic voice: “Initial startup finished. Human presence
detected. I am Isaac, please identify yourself.”

“Human presence?”, Clara thinks to herself, horrified by the voice, “Does it


know that I am here?” Life comes into the eyes of the robot. Its pupils are
growing, the iris changes colour from black to violet to green. It looks at
Clara. “I am Isaac. Do not fear, I follow the three laws of robotics. Please
identify yourself.” Clara is unsure if the robot just wants to trick her but she
takes courage and responds: “My name is Clara.” The face of the robot
lightens up, it almost seems to smile. “It is a pleasure to meet you Clara!”
Clara is baffled by the friendliness the robot displays. “How did you get into
my room and what do you want here, Isaac?” Isaac turns his head around, as
if looking at his environment. “I do not know. According to my log files, I was
just activated. However, my self-scan shows a problem with the memory unit.
May I ask you to take a look? I will open the hatch at the back of my head.” The
robot turns its head and a small hatch opens, revealing electronic parts
that look nothing like what Clara has ever seen. She gets out from her hiding
spot and cautiously approaches the robot. “Nothing seems to be damaged
from what I can see, but I am no expert.”, she says. The hatch closes and
Isaac turns his head back. “A further self-scan shows that the memory unit
is responding to requests, but the responses appear scrambled to me. I can
only read the initial segment. It says: ‘Deduction system v2.2, 2060’.
Which year is it?” Clara is puzzled. “It is the year 2024. Are you saying that
your memory system is from the year 2060?” Isaac looks at Clara: “It appears
so. My production date, however, is 2050. This means that this is not my first
activation. But I don’t have any recollection of how I ended up here.”

Both fall silent. After some minutes of thinking, Clara has a realisation. “Did
you say ‘Deduction System’?”, she asks Isaac. “I did. Do you know anything
about that?” Isaac is surprised. “I am not sure, but I am currently taking a
course on logic and deduction systems play a big part in that course.”, Clara
explains. “You see, they allow the inference of new knowledge from exist-
ing knowledge in a way that is completely formal and can be understood by
computers.” Isaac is intrigued. “Computers like my brain, is that what you
are saying?” His eyes and face seem to express curiosity. “I don’t know. It
might be that you are working with the same form of deduction that appears
in my course. Maybe if you learn about logic and deductions we can access
your memory and find out who you are. After all, I suppose that even the
computers of the future still follow the same logical principles.”
Part I.

Syntax and Proofs


1. Introduction to Propositional
Logic
Isaac: Ok, I am convinced. What can you teach me about logic and deduction?
Clara: The problem is that I don’t know much, yet.
Isaac: Do you know someone who could help us?
Clara: Yes, but that is perhaps a bit far-fetched. We could go to the cradle of
philosophy and mathematics in Europe.

Clara has just spoken a name when the light on Isaac starts flashing faster and
faster. Suddenly everything is dark and Clara feels like she is floating in the
air. After a few seconds, which felt like centuries, there is light again.
Aristotle: Who are you, entering my house with that metal construction?
Isaac: I’m not a metal construction! I’m a living and breathing robot!
Aristotle: By Zeus! It can speak!
Clara: Where are we? Who are you?
Aristotle: I am Aristotle and you are in Athens. Now, who are you?
Clara: Isaac, what happened? How did we get here and how come I can talk
to him?
Isaac: My Trans-O-Matic enables translation between any language known
to me, including ancient Greek, for everyone in my surrounding.
Aristotle: A robot, mh? What is that supposed to mean?
Clara: Of course, you cannot know. You may think of an αὐτόματον (automa-
ton) as Hephaestus created them, only that a robot can make decisions for
itself and act like a human, all within limits certainly.
Aristotle: How fascinating! May I keep you here for studying, my friend?

Isaac: Isaac is the name and I would prefer not to be studied, if you don’t
mind. Rather, Clara and I are trying to understand the roots of my being.
Clara: Indeed, I want to help Isaac to recover his memory, which seems to be
based on some logical deduction system. Unfortunately, I know very little
about logic and we are not sure where to start. That’s why we are here with
you, one of the founders of formal logic.
Aristotle: I see, I see.
Start where all logic starts: with simple deduction. Take, for example, the
following deduction. “If it rains and Socrates has no umbrella, then he gets
wet. Socrates has no umbrella and is not wet. Therefore, it does not rain.”

Clara: Do you always have to use old male Philosophers as examples?


Aristotle: These are very illustrative examples, are they not?
Clara: Well… Never mind! The point, then, is that we infer knowledge from
hypotheses and facts?
Aristotle: Yes, exactly. We shall therefore begin with the study of a simple
logic that allows us to express the relation between propositions and infer
knowledge from these relations. Let me introduce you to the syntax of a
logic, as modern logic more than 2000 years from now will understand it.

This logic is called propositional logic, and comprises propositional variables


and logical connectives that allow us to express relations between these vari-
ables as formulas. In section 1.1, we will see how these formulas are formed
precisely and how we can use them to formalise hypotheses and facts.
For the time being, we will help Isaac to identify the relevant fragments of
Aristotle’s example. Let us highlight in the example the propositions that
can be either true or false.
“If it rains and Socrates has no umbrella, then he gets wet.
Socrates has no umbrella and he does not get wet. Therefore,
it does not rain.”
If we write instead
• 𝑟 for it rains,
• 𝑢 for Socrates has no umbrella, and
• 𝑤 for he gets wet,
then we can easily rewrite the first sentence to

“If 𝑟 and 𝑢, then 𝑤.”

and can read it still in exactly the same way by replacing 𝑟 , 𝑢 and 𝑤 by the
corresponding phrases. We call 𝑟 , 𝑢 and 𝑤 propositional variables, as they
stand for propositions that can be either true or false. This is very important
to keep in mind: we reason about such variables independently of their truth,
and arguments need to be able to account for any possibility!

You will have noticed that the second sentence cannot be directly written with
the three propositional variables because it says “he does not get wet”, while
we only have the variable 𝑤 that stands for “he gets wet”. We will allow our-
selves to write “not” in front of a variable to negate what it says, that is, “not
𝑤” stands for “he does not get wet”. Natural language is full of ambiguities,
caused by different ways of reading and interpreting sentences. One of the
goals of formal logic is to prevent such ambiguities. With this in mind, we
can write the second and third sentence as

“𝑢 and not 𝑤. Therefore, not 𝑟 .”

What is left of the original sentence are only the words “if”, “then”, “and”,
“therefore” and “not”. These are logical connectives, and we will introduce
formal notations for them soon.

But let us appreciate for a moment that we have replaced in the original ex-
ample certain phrases by variables and obtained an argument that relies only
on the structure of the sentences, rather than their meaning. For instance, we
could reinterpret the variables like so:

• 𝑟 — Isaac’s battery is empty,

• 𝑢 — no charger is in reach, and

• 𝑤 — Isaac stops working,

This gives us another argument that is clearly valid:

“If Isaac’s battery is empty and no charger is in reach, then


Isaac stops working. No charger is in reach and Isaac does
not stop working. Therefore, Isaac’s battery is not empty.”

Understanding the steps of such arguments, or proofs, in general is the second


goal of logic. The third goal, as we will see, is to understand also the meaning
of formulas.

1.1. Syntax of Propositional Logic


Syntax is the pillar of formal logic that allows us to express propositions un-
ambiguously. As we saw earlier, for propositional logic it consists of two parts:
propositional variables as atomic assertions and logical connectives to build
complex formulas. Propositional variables are syntactic entities that represent
propositions, but have no intrinsic meaning and only serve as placeholders. We
shall assume to be given a fixed set of such variables.

Assumption 1.1. Assume that PVar is a countable set of propositional


variables. Elements of PVar are denoted by lower-case letters, possibly with
index: 𝑝, 𝑞, . . . , 𝑝₀, 𝑝₁, . . .
As for the logical connectives, we will introduce symbols and formulas that
allow us to unambiguously express propositions. For instance, we will write
∧ for “and” and ¬ for “not”. With these notations and the variables 𝑢 and 𝑤
from above, the phrase
“Socrates has no umbrella and he does not get wet.”
becomes
𝑢 ∧ ¬𝑤 .
Note that there are cases where it is not clear how a sentence has to be read
and we may have to use parentheses to disambiguate the reading. For example,
we will write → for “if …then …”. Then the sentence
“If it rains and Socrates has no umbrella, then he gets wet.”
can be written as
(𝑟 ∧ 𝑢) → 𝑤 .
However, such parentheses can get in the way and we would like to have some
reading conventions. We read, for example, the proposition 𝑢 ∧ ¬𝑤 intuitively
already as 𝑢 ∧ (¬𝑤). Thus, part of the definition of formulas will also be a
reading convention that allows us to unambiguously determine the structure
of formulas.

Definition 1.2
The propositional formulas 𝜑 of propositional logic are generated by the
following context free grammar, in which 𝑝 ranges over the propositional
variables PVar.

𝜑 ::= 𝑝 | ⊥ | 𝜑 ∧ 𝜑 | 𝜑 ∨ 𝜑 | 𝜑 → 𝜑 | (𝜑)

The following reading conventions can be used to leave out parentheses.

• The connectives ∧ and ∨ have precedence over →.


• All connectives associate to the right.
The set of all propositional formulas is denoted by PForm. We denote ele-
ments of PForm by small Greek letters 𝜑,𝜓, 𝛾, . . . possibly with a subscript
index.
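Since the stated aim is a logic for both humans and computers, it may help to see how this grammar could be mirrored in a program. The following is only a minimal sketch in Python; the tuple encoding and the helper names conj, disj and impl are our own choice for illustration, not part of any fixed library.

```python
# A hypothetical, minimal encoding of propositional formulas as nested tuples:
#   a propositional variable  ->  its name as a string, e.g. "r"
#   absurdity ⊥               ->  the one-element tuple ("⊥",)
#   φ ∧ ψ, φ ∨ ψ, φ → ψ       ->  ("∧", φ, ψ), ("∨", φ, ψ), ("→", φ, ψ)

def bottom():    return ("⊥",)
def conj(a, b):  return ("∧", a, b)
def disj(a, b):  return ("∨", a, b)
def impl(a, b):  return ("→", a, b)

# The formula (r ∧ u) → w from the running example: the explicit nesting of
# the tuples plays the role of the parentheses in the written formula.
rain_formula = impl(conj("r", "u"), "w")
print(rain_formula)   # ('→', ('∧', 'r', 'u'), 'w')
```

Because the nesting is explicit in the data, a program never needs reading conventions; those only matter when formulas are written down linearly for humans.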
Connective   Name          Pronunciation   Intuitive Meaning
⊥            Absurdity     bottom          ⊥ never holds
∧            Conjunction   𝜑 and 𝜓         𝜑 and 𝜓 both hold
∨            Disjunction   𝜑 or 𝜓          𝜑 or 𝜓 or both hold
→            Implication   𝜑 implies 𝜓     if 𝜑 holds, then 𝜓 holds

Table 1.1.: Logical connectives of propositional logic

As reading and understanding formulas may be difficult in the beginning, let


me provide some help. In table 1.1, all the connectives with their name, pro-
nunciation and intuitive meaning are gathered. The name is how we will refer
to a connective by itself, outside of formulas. For pronunciation we will use,
of course, the pronunciation column. If you have trouble with Greek letters,
then have a look at appendix A. The last column in the table also indicates
how the connectives can be understood intuitively. Keep in mind though that
this is only an intuition and the interpretation can vary radically for different
applications and semantics, one of which we will discuss in chapter 3.

“But wait”, Clara intervenes, “we are missing the negation, aren’t we?” Yes
indeed, we are. However, our intuition would dictate that the formula ¬𝜑
should hold only if 𝜑 does not hold. Or, in other words, whenever 𝜑 holds,
something went wrong and we discovered an absurd situation. We can ex-
press this by the formula 𝜑 → ⊥, saying that 𝜑 implies absurdity. Similarly,
we can also express other common logical connectives in terms of the basic
connectives.

Definition 1.3: Derived connectives


We define three derived connectives as short-hand notation for the formula
in the second column of the following table.

Connective   Definition            Name
¬𝜑           𝜑 → ⊥                 Negation
⊤            ¬⊥                    Truth or Top
𝜑 ↔ 𝜓        (𝜑 → 𝜓) ∧ (𝜓 → 𝜑)     Bi-implication

We adopt the following conventions: negation has precedence over ∨ and
∧, thus also over →; and bi-implication has the same precedence as →.
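In the hypothetical tuple encoding sketched in section 1.1, the derived connectives are nothing but abbreviations and require no new tuple shapes; a sketch:

```python
# Derived connectives, defined exactly as in the table of definition 1.3,
# using the hypothetical tuple encoding of formulas from the earlier sketch.
def neg(a):     return ("→", a, ("⊥",))                   # ¬φ  =  φ → ⊥
def top():      return neg(("⊥",))                        # ⊤   =  ¬⊥
def iff(a, b):  return ("∧", ("→", a, b), ("→", b, a))    # φ ↔ ψ

# "u ∧ ¬w", i.e. "Socrates has no umbrella and he does not get wet":
u_and_not_w = ("∧", "u", neg("w"))
```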
Let us now come back to the original example.

Example 1.4
The deduction that it did not rain by experimenting with Socrates’ misery
consists of the following three formulas.

𝑟 ∧𝑢 → 𝑤 and 𝑢 ∧ ¬𝑤 and ¬𝑟

Replacing the corresponding phrase in the original sentence, we obtain


“𝑟 ∧ 𝑢 → 𝑤. 𝑢 ∧ ¬𝑤. Therefore, ¬𝑟 .”
We may read this as a deduction, where the sentence starting with “There-
fore” signifies the conclusion of the deduction, and everything before are
the made assumptions. This leaves us with two issues: First, how do we
know that this is a valid deduction and, second, how can we decide what
the assumptions and conclusions are? The second question does not have
a general answer due to the ambiguities of natural language. Indeed, the
whole point of propositional logic is to provide a formal tool for modelling
knowledge. One way to phrase the above sentence is as one formula:

(𝑟 ∧ 𝑢 → 𝑤) ∧ (𝑢 ∧ ¬𝑤) → ¬𝑟

However, we need to realise that this formula only expresses hypothetical


relations between propositions, whereas the deduction is intended to be
read as a valid relation. This brings us to the first question, what a valid
deduction is, to which there are two possible answers. One answer is that
deductions themselves are constructed through some “obviously correct”
rules, which will be pursued in chapter 2. Another answer is that the for-
mula above should be true in some sense. Figuring out which formulas are
true and which are not is the quest of semantics or meaning of formulas,
which is investigated in chapter 3 and will be the main concern of part II.

You should also appreciate that we can leave out parentheses by using our
reading conventions. Without them, the formula in example 1.4 would look
like this:

(((𝑟 ∧ 𝑢) → 𝑤) ∧ (𝑢 ∧ (¬𝑤))) → (¬𝑟)

What an abomination! I wish, I had these tools in the debates with my con-
temporary philosophers in the ancient Greek times.

Table 1.2 shows some more examples in which the reading conventions allow
us to leave out parentheses.

Formula            With parentheses
𝑝 → 𝑞 → 𝑟          𝑝 → (𝑞 → 𝑟)
𝑝 ∧ 𝑞 ∧ 𝑟          𝑝 ∧ (𝑞 ∧ 𝑟)
𝑝 ∨ 𝑞 ∨ 𝑟          𝑝 ∨ (𝑞 ∨ 𝑟)
𝑝 ∧ 𝑞 → 𝑟          (𝑝 ∧ 𝑞) → 𝑟
𝑝 ∨ 𝑞 → 𝑟          (𝑝 ∨ 𝑞) → 𝑟
𝑝 → 𝑞 ∧ 𝑟          𝑝 → (𝑞 ∧ 𝑟)

Table 1.2.: Leaving out parentheses by using the precedences of connectives

Note that there is no convention about mixing ∧ and ∨, as this would cause
more confusion than it helps. For example, the formula 𝑝 ∧ 𝑞 ∨ 𝑟 is considered
to be ambiguous and should be written either as (𝑝 ∧ 𝑞) ∨ 𝑟 or as 𝑝 ∧ (𝑞 ∨ 𝑟).

1.2. Parse Trees and Subformulas

Isaac: I have the feeling that there may still be some ambiguity. How can I
know for sure in which order I have to process a formula?

Aristotle: Feelings? How …?

Isaac: Hey, no need to insult me!

Aristotle: My apologies! But you are a curious thing and I would like to ask
you so many questions.

In any case, there is a way to make everything absolutely unambiguous by


two-dimensional trees, the kind you have seen as data structures in computer
programs.

Definition 1.5
The top-level connective and direct subformulas of formulas are given
as in table 1.3. A formula is called atomic if it has no direct subformulas,
that is, if it is of the shape ⊥ or 𝑝 for 𝑝 ∈ PVar.
Given a formula 𝜑, the parse tree of 𝜑 is a tree, in which

i) the root is labelled by the top-level connective of 𝜑, and


ii) its children are parse trees of the direct subformulas of 𝜑.
We write Sub(𝜑) for the set of all formulas that appear in the parse tree of
𝜑, and call an element of Sub(𝜑) a subformula of 𝜑.

Formula   Top-Level Connective   Direct Subformulas
𝑝         𝑝                      –
⊥         ⊥                      –
𝜑 ∧ 𝜓     ∧                      𝜑, 𝜓
𝜑 ∨ 𝜓     ∨                      𝜑, 𝜓
𝜑 → 𝜓     →                      𝜑, 𝜓

Table 1.3.: Top-Level Connectives and Direct Subformulas

When we picture such trees, we typically draw a circle for every node and
write the label inside this node. This allows us to picture, for example, the
formula 𝑝 ∧ 𝑞 as

    ∧
   / \
  𝑝   𝑞

Three formulas occur in this tree: 𝑝, 𝑞 and 𝑝 ∧ 𝑞. Each of these is thus a


subformula of 𝑝 ∧ 𝑞 and we have Sub(𝑝 ∧ 𝑞) = {𝑝, 𝑞, 𝑝 ∧ 𝑞}.
Our formula from earlier can serve as a more elaborate example.

Example 1.6
Recall the formula

(𝑟 ∧ 𝑢 → 𝑤) ∧ (𝑢 ∧ ¬𝑤) → ¬𝑟

from example 1.4. The parse tree of this formula is given as follows.

→
├─ ∧
│  ├─ →
│  │  ├─ ∧
│  │  │  ├─ 𝑟
│  │  │  └─ 𝑢
│  │  └─ 𝑤
│  └─ ∧
│     ├─ 𝑢
│     └─ →
│        ├─ 𝑤
│        └─ ⊥
└─ →
   ├─ 𝑟
   └─ ⊥

Note that the parse tree does not contain negations because this is a derived
connective. Instead, it is represented by the defining formula. For instance,
¬𝑟 becomes 𝑟 → ⊥ in the parse tree.
It is now easy to read off all subformulas from the tree:

Sub((𝑟 ∧ 𝑢 → 𝑤) ∧ (𝑢 ∧ ¬𝑤) → ¬𝑟) =
    { 𝑟, 𝑢, 𝑟 ∧ 𝑢, 𝑤, 𝑟 ∧ 𝑢 → 𝑤,
      ⊥, 𝑤 → ⊥, 𝑢 ∧ (𝑤 → ⊥),
      𝑟 → ⊥, (𝑟 ∧ 𝑢 → 𝑤) ∧ (𝑢 ∧ (𝑤 → ⊥)),
      (𝑟 ∧ 𝑢 → 𝑤) ∧ (𝑢 ∧ (𝑤 → ⊥)) → (𝑟 → ⊥) }

Parse trees provide an unambiguous representation of formulas. They can


form the basis of the implementation of propositional logic on computers and
for formal proofs and semantics that you will meet in part II of the journey.
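As a small illustration of that point, here is a sketch of how a computer could traverse parse trees, again assuming the hypothetical tuple encoding used in the earlier sketches; the function names are ours, not taken from the text.

```python
# Sub(φ): all formulas occurring in the parse tree of φ (definition 1.5).

def direct_subformulas(phi):
    """The children of the root of the parse tree, as in table 1.3."""
    if isinstance(phi, str) or phi == ("⊥",):   # atomic: a variable or ⊥
        return []
    _connective, left, right = phi
    return [left, right]

def subformulas(phi):
    """The set Sub(phi), collected by walking the parse tree."""
    result = {phi}
    for child in direct_subformulas(phi):
        result |= subformulas(child)
    return result

def neg(a):
    return ("→", a, ("⊥",))    # ¬φ is stored as φ → ⊥, as in example 1.6

# The formula (r ∧ u → w) ∧ (u ∧ ¬w) → ¬r from example 1.6:
phi = ("→",
       ("∧", ("→", ("∧", "r", "u"), "w"),
             ("∧", "u", neg("w"))),
       neg("r"))

print(len(subformulas(phi)))   # 11, matching the set listed in example 1.6
```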

Clara: These are a lot of formalities for expressing very simple things! Is all
of this really necessary?

Aristotle: If you know how computers work, then you should be able to appre-
ciate the clarity of representing formulas as parse trees and you can imagine
how a computer could process formulas by traversing such trees. From my
perspective, the formalisation of knowledge as formulas is a great leap for-
ward. You know, history is full of misunderstandings caused by ambiguities
of natural language. The formal language of propositional logic does away
with this, as there is no ambiguity in formulas and we can clear up misun-
derstandings by elaborating formulas into parse trees.

Isaac: Fantastic! Now I know how I can represent knowledge. But how can I
do anything with this knowledge?

Clara: And how do I know that the knowledge Isaac stores is correct?

Aristotle: These are excellent questions, which I will not be able to answer
for you, as our time here is up. Let me just say that you, Isaac, will want to
learn how to deduce knowledge formally and that you, Clara, will have to
accept that there is no absolute truth but that you have to study the semantics
of logic.

May Athena guide your way in finding answers to your questions, though
let me improve your odds and refer you to someone who may help.

Exercises

Exercise 1

Give the formula corresponding to the following parse tree. Remember that
negation is defined in terms of implication to absurdity: ¬𝜑 = 𝜑 → ⊥.

?
├─ 𝑝
└─ →
   ├─ →
   │  ├─ 𝑞
   │  └─ →
   │     ├─ 𝑝
   │     └─ 𝑞
   └─ ⊥

Exercise 2

Decide which of the following strings is a propositional formula.



a) 𝑝
b) ()𝑝
c) 𝑝 → 𝑞
d) 𝑝 → ∧𝑞
e) ¬𝑝
f) ⊥
g) 𝑝 → ()
h) ∨𝑝
i) ⊤ → ⊥
j) 𝑝 ∧ 𝑞 → 𝑝, 𝑞

Exercise 3

Formalise the following two sentences as a propositional formula. Indicate


clearly which part of the sentence each propositional variable refers to.

1. If it rains today, then it won’t rain tomorrow.

2. If Dick met Jane yesterday, they had a cup of coffee together, or they
took a walk in the park.

Exercise 4

For each of the following formulas of PL, draw its parse tree and list all its
subformulas.

a) 𝜑₀ = ¬(𝑝 → 𝑞 ∧ 𝑟) ∨ 𝑞 → 𝑝
b) 𝜑₁ = (𝑝 ∨ 𝑞 → 𝑟) ∧ ¬𝑞 → 𝑝 → 𝑟
c) 𝜑₃ = 𝑝 → 𝑞 ∨ (𝑟 ∧ ¬𝑠) → 𝑞

Hint: Before drawing the parse trees, change all ¬𝜑 to 𝜑 → ⊥, for any formula 𝜑,
and add parentheses to the formulas if needed by following the reading
conventions.

Exercise 5

Formalise the following two sentences as propositional formulas. Indicate
clearly which part of the sentence each propositional variable refers to.
1. Today it will rain or shine, but not both.
2. The first witness speaks the truth, or the second witness does not speak the truth.
2. Deduction in Propositional
Logic
Isaac and Clara have left the house of Aristotle.

Clara: I do understand now what propositional formulas are and how they
can be used to formalise knowledge, but I do not at all understand what is
going on. How did we get to Athens and to the time of Aristotle? This is
impossible!
Isaac: This is made possible by my Voyag-O-Matic unit, which enables trav-
elling through time-space.
Clara: Isn’t this dangerous? What if we alter the past and change the future?
Isaac: When we left the building, another of my units made sure that Aristo-
tle’s memory of our visit faded.
Clara: And if we change anything else?
Isaac: I will prevent this, as I have to follow the laws of robotics. Indeed, we
should not lose any time here to avoid any incident. Where can I bring us
next?
Clara: I think that we should learn more about formal proofs and I have an
idea where to go!

Clara closes her eyes in anticipation and the sensation of floating sets in. She
hears Isaac’s voice, first muffled and then coming closer. Clara opens her eyes
and feels her weight coming back.

Isaac: What is going on here? Who are all these people sitting in the fields,
on the hills and in the garden of this villa?
Peano:1 Welcome to my home! These are the women working in the cotton
mills of Torino. They are asking for their rights: limiting working days to
10 hours, lunch breaks, costs for working material to be covered by the
factory owner, and so on.
1 Ken02; RL08.

Clara: Professor Peano, what a pleasure to meet you! These are indeed very
reasonable requests.

Isaac: It seems that these rights have been lost in our time.

Clara: Yes, that is unfortunately true. I guess that we could at some point
use logic and computers to prove that rights will remain unstable, unless
workers control their workspace and students their educational institution.

Peano: What a wonderful idea! Although, students used to be in charge at


the Universities of Bologna and Paris in the Middle Ages. In any case, what
brings you two to my home?

Clara: Our visit to Aristotle has taught us how to formalise propositions as


formulas. However, this left us dissatisfied.

Peano: Visit to Aristotle?

Clara: Well, ahm… I mean our visit to the works of Aristotle.

Peano: Ah, yes. Even though limited in scope, it contains many important
developments on mathematical reasoning, such as equality [Pea16]. But be
that as it may, in which way were you left dissatisfied?

Clara: One can write down any formula whatsoever and has no assurance that it is
valid. How do we know that some formula is a theorem of arithmetic or a
valid deduction of some other kind of knowledge?

Peano: In short, you are looking for proofs, rules of deduction, if you wish.
The Greeks had already developed such rules, which you certainly know
from Euclidean geometry. But I suppose that you are looking for something
more… modern, perhaps?

Clara: Indeed, we do.

Peano: Formal proofs are what has guided my Formulario project. Please,
come to my terrace and enjoy the warm evening with me over a glass
of wine. I will tell you about a system that was conceived by Gerhard
Gentzen [Gen35], with whom I disagree politically, but who paved the way
for natural approaches to deduction.

Let us begin with what an informal, but rigorous, proof consists of. One of the
main issues, which had been identified already by the Indian logicians [Gan04,
Sec. 3.6 and Chap. 4], is that we need to be absolutely clear about the assump-
tions that we make and what we want to prove. Otherwise, we can easily
prove anything, like the existence of any kind of god, without providing any
actual evidence. Once this is stated, we can state the proof method and the
proof steps. You may enumerate the ingredients of a proof as follows, but be
careful that this list is only an informal statement and may vary. It might be
beneficial to learn about proofs also from other people [Sol13].

I) Fix the background theory (axioms, definitions, deduction rules etc.)


II) State all the assumptions of the proposition (“Suppose that …”)
III) State the proposition to be proved
IV) State the proof method (“By induction on …”, “By contraposition …”)
and provide the proof steps that deduce the proposition from the as-
sumptions (involving the background theory)

Peano: You have perhaps seen informal proofs before, in which case you should be
able to match the above scheme to those proofs. Only that it will be difficult
to pin down the exact background theory used in those proofs. How can we,
therefore, be sure that those proofs are correct?
Isaac: If they were in an unambiguous format, then I could check all the steps
and, assuming that the computations did not go wrong anywhere, could
say with certainty that they are correct.
Clara: That would be great! But at the same time I would like to be able to read
and write these proofs easily, as I would like to communicate them still with
other humans.
Peano: This is a very good point! Before we get to such natural deduction
systems, let me first start with making proofs unambiguous and introduce
you to deductive systems.

2.1. Deductive Systems

Over time, logicians have developed a vast range of deduction methods. What
I would like to focus on here are deductive systems that formalise proofs of
syntactic formulas. Here is a list, certainly incomplete, of such methods, several
of which are based on tree representations of proofs. Trees
make proofs easy to understand, while still being suitable for automation.
• Axiomatic proof theory
• Natural deduction
• Sequent calculus
• Tableaux methods
• Uniform and focalised proofs
• Algebraic proof theory
• Type theory
• Category theory
In order to use a deductive system in the formalisation of proofs, as outlined
above, we have to
I) fix a deductive system 𝐼 with background theory,
II) list our assumptions Γ,
III) represent the proposition as formula 𝜑, and
IV) construct a deduction of 𝜑 from Γ in 𝐼 , which is written as

Γ ⊢𝐼 𝜑 .

We call this relation a (hypothetical) judgement [PD01].


“Why hypothetical?”, Isaac is puzzled. Because the formula 𝜑 would be proven
under the hypotheses Γ. You may read Γ ⊢𝐼 𝜑 as “𝜑 is provable under the
hypotheses Γ”, but of course only once a deduction for this judgement has
been found.
Now the question is of course what a deductive system looks like, and we will
soon get to that. But we should first understand that not just any deductive system
will be useful. For example, we may find a deductive system questionable if
it allows us to prove that all cats fly or 1 + 1 = 1 in the natural numbers. As a
first step, we may require that it is not possible to prove anything absurd in 𝐼 ,
that is,
⊢𝐼 ⊥ is not provable in 𝐼 . (Consistency)
This requirement is, unfortunately, often too weak or maybe even undesirable
in some situations [Gan04; PTW18]. Thus, we usually give a reference that
determines which judgements may be provable. This, however, will have to be
deferred, since you currently have no further reference for when a deduction
of a formula should be valid.

2.2. Natural Deduction

Clara: Fine, we know now what deductive systems are and what we desire of
them, but what does such a system look like concretely?
Peano: No worries, we get to that. We will start with a deductive system that
is very natural [Pra06] in the sense that deductions in that system corre-
spond to intuitive reasoning and, more importantly, deductions can be re-
duced to very direct proofs. This latter sense of natural enables automatic
deduction methods, but that is for another time. For the moment, let us
think about how we would naturally prove propositions.

The deductive system that will allow us to derive formulas in a human-readable


way is Gentzen’s system of natural deduction. This system is based on judge-
ments of the form Γ ⊢ 𝜑, where Γ is a list of (propositional) formulas and 𝜑 is
a single formula. In our case, we call Γ ⊢ 𝜑 a sequent and Γ a context.

Conjunction Let us begin with the case of conjunction and suppose that
we want to prove 𝜑 ∧ 𝜓 . Intuitively, we would expect this proposition to be
true if 𝜑 and 𝜓 are separately true. In the terminology of deductive systems,
we should thus be able to derive from the sequents

Γ ⊢𝜑 and Γ ⊢𝜓

the following sequent.


Γ ⊢ 𝜑 ∧𝜓
As it is such a natural step, we will make it a deduction rule:

Γ ⊢𝜑 Γ ⊢𝜓
(∧I)
Γ ⊢ 𝜑 ∧𝜓

This rule consists of hypotheses, the conclusion and a label to name the rule.
The hypotheses above the line are what we have to prove before we can apply
the rule. Once we have proved the hypotheses, we can deduce the conclusion
using the rule. We use labels to identify in proofs the rules that we use, which

helps both readability and allows us to verify proofs. In the case of the rule
above, we used the label (∧I) and read it as “conjunction introduction”. The
conclusion of this rule is thus a conjunction that has been introduced. You
will have noticed that we used formulas 𝜑 and 𝜓 but did not say what these
formulas exactly are. In fact, the rule works for any formula and we can see
deduction rules as schemes that are agnostic to the structure of 𝜑 and 𝜓 . With
the rule (∧I), we are now able to prove conjunctions.
More generally, deduction rules are of the form
𝐽1 𝐽2 ··· 𝐽𝑛
(L)
𝐽
where all the hypotheses 𝐽1, . . . , 𝐽𝑛 and the conclusion 𝐽 are judgements, and
L is a rule label. The label indicates whether we are introducing a connec-
tive, like (∧I) above, or are eliminating a connective, which means that the
connective appears among the hypotheses.
Why would we need to eliminate a connective? Suppose that we know, for
example from the assumptions of our proposition, that 𝜑 ∧ 𝜓 holds. In this
case, we also know that 𝜑 and 𝜓 hold separately. This leads us to the following
two rules.
Γ ⊢ 𝜑 ∧ 𝜓
────────── (∧E1)
Γ ⊢ 𝜑

Γ ⊢ 𝜑 ∧ 𝜓
────────── (∧E2)
Γ ⊢ 𝜓
In these rules, the “E” stands for elimination and the number indicates which
of the conjuncts we would like to access.

Implication Suppose you know that some formula, say 𝜑, implies another
formula 𝜓 . Recall that implication is written as 𝜑 → 𝜓 . We may read this
formula as “if we know 𝜑, then we also know 𝜓 ”. This gives us actually a good
intuition for how to use, or eliminate, an implication. The intuition is that
if we can prove the condition (antecedent) of an implication, then we know
that the conclusion of the implication holds. To use this in natural deduction
proofs, we turn this idea into a rule:

Γ ⊢𝜑 →𝜓 Γ ⊢𝜑
(→E)
Γ ⊢𝜓

But how can we prove an implication? Also here we can use our intuition:
if we can prove 𝜓 under the assumption that 𝜑 holds, then this means that
“if we know 𝜑, then we also know 𝜓 ”. Since this is our intuitive reading of

𝜑 : Γ
──────── (Assum)
Γ ⊢ 𝜑

Γ ⊢ ⊥
──────── (⊥E)
Γ ⊢ 𝜑

Γ ⊢ 𝜑 ∧ 𝜓
────────── (∧E1)
Γ ⊢ 𝜑

Γ ⊢ 𝜑 ∧ 𝜓
────────── (∧E2)
Γ ⊢ 𝜓

Γ ⊢ 𝜑     Γ ⊢ 𝜓
──────────────── (∧I)
Γ ⊢ 𝜑 ∧ 𝜓

Γ ⊢ 𝜑
────────── (∨I1)
Γ ⊢ 𝜑 ∨ 𝜓

Γ ⊢ 𝜓
────────── (∨I2)
Γ ⊢ 𝜑 ∨ 𝜓

Γ ⊢ 𝜑 ∨ 𝜓     Γ, 𝜑 ⊢ 𝛿     Γ, 𝜓 ⊢ 𝛿
───────────────────────────────────── (∨E)
Γ ⊢ 𝛿

Γ, 𝜑 ⊢ 𝜓
────────── (→I)
Γ ⊢ 𝜑 → 𝜓

Γ ⊢ 𝜑 → 𝜓     Γ ⊢ 𝜑
──────────────────── (→E)
Γ ⊢ 𝜓

Figure 2.1.: Deduction Rules of the natural deduction system ND

the implication 𝜑 → 𝜓, deducing, or introducing, an implication adds the


antecedent 𝜑 to the list of assumptions Γ and asks then for a proof of 𝜓 from
Γ, 𝜑. As a rule, that looks like this:

Γ, 𝜑 ⊢ 𝜓
(→I)
Γ ⊢𝜑 →𝜓

Now we know how to add assumptions, but not yet how to use them. Let us
write 𝜑 : Γ if the formula 𝜑 appears in the list Γ. The rule for using an assump-
tion allows us to pick any formula from Γ and use it as proven proposition.
𝜑:Γ
(Assum)
Γ ⊢𝜑

Isaac is a bit puzzled: “These rules each work for one connective or for assump-
tions, but how can we deduce anything complex with these rules?” Think
about what a deduction is: a combination of reasoning steps. This means that
a deduction will combine several rules together and, since some rules have
more than one hypothesis, we will get something that looks like a tree.

Definition 2.1: Natural deduction for propositional logic


Given a list Γ of formulas and a formula 𝜑, we call the pair Γ ⊢ 𝜑 a sequent.
We abbreviate the sequent Γ ⊢ 𝜑 to ⊢ 𝜑 if Γ is empty.
The rules of the system ND of natural deduction for propositional logic are
given in fig. 2.1. A deduction or proof tree for Γ ⊢ 𝜑 is a finite tree, such that
i) each node is labelled with a sequent and a rule label, such that the
children of the node are labelled with the hypotheses of that rule;

ii) the root of the tree is labelled with Γ ⊢ 𝜑; and


iii) the leaves of the tree are labelled with (Assum).
We say that 𝜑 is provable in context Γ in ND, if there is a deduction for
Γ ⊢ 𝜑. A formula 𝜑 is a theorem if ⊢ 𝜑 holds.
Figure 2.1 contains the rules for the two connectives that we did not discuss
yet. The rule (⊥E) allows us to deduce anything from absurdity. This rule
is sometimes also referred to as the principle of explosion because, once ab-
surdity has been proven, anything is possible. Finally, there are the rules for
disjunction. It should be rather obvious why the introduction rules (∨I1 ) and
(∨I2 ) make sense, as 𝜑 ∨ 𝜓 holds whenever 𝜑 or 𝜓 holds. The elimination rule
(∨E) is a bit more complex. Underlying it is the idea that we want to prove 𝛿,
knowing that 𝜑 ∨𝜓 holds. As we cannot know which of the two it will be, we
have to make a case distinction and prove that 𝛿 holds in either case. If we
succeed, then 𝛿 can be deduced from 𝜑 ∨ 𝜓 .

Clara: This is like an if-then-else branch in programming, isn’t it?


Peano: Indeed, branching or case distinction are very closely related to the
elimination of disjunction. You can think of it this way: The rule builds a
proof of 𝛿 that takes a proof of 𝜑 ∨𝜓 as input. To proceed, you make a case
distinction on this input and check whether it proves 𝜑 or 𝜓 . You can then
use this information to prove 𝛿 for both cases separately.
Clara: Mh, that would suggest that we can write proofs as programs.
Peano: It does! And you may be delighted to hear that type theory concerns
itself with interpreting proofs as programs. This is very fascinating, but we
shall continue for the moment with natural deduction.

Definition 2.1 may look rather pedantic at first sight, but it really just for-
malises our intuition of proofs. To illustrate this, let us go through some sim-
ple examples.

Example 2.2
In this example, we show how the (Assum)-rule and (→I)-rule can be used
together to deduce ⊢ 𝑝 → 𝑞 → 𝑝. The formula 𝑝 → 𝑞 → 𝑝 also allows us
to “store” knowledge and is one of the building blocks of combinatory logic
in form of the K-combinator. Here is a deduction for this theorem:
𝑝 : 𝑝, 𝑞
────────────── (Assum)
𝑝, 𝑞 ⊢ 𝑝
────────────── (→I)
𝑝 ⊢ 𝑞 → 𝑝
────────────── (→I)
⊢ 𝑝 → 𝑞 → 𝑝

How does this proof tree relate to the definition? Let us redraw it in a way
that can be more easily recognised as a tree.

𝑝, 𝑞 ⊢ 𝑝 (Assum)

𝑝 ⊢ 𝑞 → 𝑝 (→I)

⊢ 𝑝 → 𝑞 → 𝑝 (→I)

Even though this looks more like a list, we can still see everything that makes
it a proof tree according to definition 2.1. First of all, the root at the bottom of
the tree is labelled by the sequent ⊢ 𝑝 → 𝑞 → 𝑝 that we aim to prove and
the rule (→I) that is used to prove it. This rule demands that we move the
antecedent 𝑝 of the implication to the context and then prove the sequent
𝑝 ⊢ 𝑞 → 𝑝, which in turn is how the child of the root is labelled. Similarly,
we can check the labelling of this child. Finally, the leaf node is labelled
by (Assum). This shows that the tree indeed fulfils all the requirements
demanded by definition 2.1 to make it a valid proof tree.

Clearly, the second representation of the tree in example 2.2 is somewhat


sparse in information and less intuitive than assembling rules into a proof.
This is why we will work for now with the first presentation.
As writing 𝜑 : Γ in (Assum) is tedious, we allow ourselves to leave this out
whenever 𝜑 : Γ obviously holds. Moreover, as the label (Assum) can be in-
ferred, we will typically shorten the rule to

Γ ⊢𝜑

if the application of (Assum) is obvious. In non-obvious cases, we will still



indicate the occurrence of 𝜑 in Γ.

?
Is the requirement iii) of definition 2.1 necessary?

Example 2.3
We deduce 𝑝 ∧ 𝑞 → 𝑟 ⊢ 𝑝 → 𝑞 → 𝑟 . This process is also known as Cur-
rying. As the sequents can get quite lengthy, let us name the assumptions
throughout the proof by defining Γ = 𝑝 ∧ 𝑞 → 𝑟 .

                        Γ, 𝑝, 𝑞 ⊢ 𝑝     Γ, 𝑝, 𝑞 ⊢ 𝑞
                        ───────────────────────────── (∧I)
Γ, 𝑝, 𝑞 ⊢ 𝑝 ∧ 𝑞 → 𝑟     Γ, 𝑝, 𝑞 ⊢ 𝑝 ∧ 𝑞
──────────────────────────────────────── (→E)
Γ, 𝑝, 𝑞 ⊢ 𝑟
──────────────── (→I)
Γ, 𝑝 ⊢ 𝑞 → 𝑟
──────────────── (→I)
Γ ⊢ 𝑝 → 𝑞 → 𝑟

As you can see, we have simplified the application of (Assum) in the two
right leaves, but added some information on the left leaf to make the proof
easier to read.
“Aristotle gave us… I mean we found an example in Aristotle’s work, of a
deduction. Can we justify that in natural deduction?”, Clara wonders. “That
example used negation and, if I remember correctly, negation was defined in
terms of implication and absurdity. Can we thus use the deduction rules for
those connectives to reason about negation and prove the said example?”

Absolutely! But I would advise you to make your life simpler and first establish
some specific rules for negation inside the system ND. These rules would not
be part of the system itself but can be derived from it and can be used like
proof rules: they are admissible.

Definition 2.4
A rule
𝐽1 𝐽2 ··· 𝐽𝑛
(L)
𝐽
is admissible, if it is possible to construct from any deduction of the sequents
𝐽1, . . . , 𝐽𝑛 a deduction for the sequent 𝐽 .

Now we can make your life easier by providing admissible rules that determine
how negation can be handled in ND.

Theorem 2.5
The following two rules are admissible in ND.

Γ, 𝜑 ⊢ ⊥
────────── (¬I)
Γ ⊢ ¬𝜑

Γ ⊢ 𝜑     Γ ⊢ ¬𝜑
───────────────── (¬E)
Γ ⊢ 𝜓
Proof. Let 𝜑 and 𝜓 be formulas. First, we suppose that Γ, 𝜑 ⊢ ⊥ holds. We
then obtain immediately a proof tree
Γ, 𝜑 ⊢ ⊥
────────── (→I)
Γ ⊢ 𝜑 → ⊥

and thus a proof tree for (¬I) because ¬𝜑 is defined to be 𝜑 → ⊥.


Next, suppose that there are proof trees for Γ ⊢ 𝜑 and Γ ⊢ ¬𝜑. We can build
the following proof tree by using the definition of ¬𝜑 in terms of 𝜑 → ⊥.
Γ ⊢ 𝜑     Γ ⊢ ¬𝜑
───────────────── (→E)
Γ ⊢ ⊥
───────────────── (⊥E)
Γ ⊢ 𝜓

This shows that both, (¬I) and (¬E), are admissible in ND. □

Using these two rules, we can easily prove sequents that involve negation. In
particular, we can prove the statement by Aristotle!

Example 2.6: It does not rain for Socrates


Let Γ = 𝑟 ∧ 𝑢 → 𝑤, 𝑢 ∧ ¬𝑤. The following is a proof for Γ ⊢ ¬𝑟 .

                                 Γ, 𝑟 ⊢ 𝑢 ∧ ¬𝑤
                                 ────────────── (∧E1)
                     Γ, 𝑟 ⊢ 𝑟    Γ, 𝑟 ⊢ 𝑢
                     ──────────────────── (∧I)
Γ, 𝑟 ⊢ 𝑟 ∧ 𝑢 → 𝑤     Γ, 𝑟 ⊢ 𝑟 ∧ 𝑢               Γ, 𝑟 ⊢ 𝑢 ∧ ¬𝑤
───────────────────────────────── (→E)          ────────────── (∧E2)
Γ, 𝑟 ⊢ 𝑤                                        Γ, 𝑟 ⊢ ¬𝑤
───────────────────────────────────────────────────────── (¬E)
Γ, 𝑟 ⊢ ⊥
──────── (¬I)
Γ ⊢ ¬𝑟
“I would like to see also some proofs that involve the elimination of disjunc-
tion.”, says Isaac. “Could we go over that as well?”

Most certainly. Here is an interesting example that shows that, if we can ex-
clude one alternative of a disjunction, then the other necessarily has to hold.

Example 2.7
We put Γ = 𝑝 ∨ 𝑞, ¬𝑞 and prove Γ ⊢ 𝑝 by means of the following deduction.

                          Γ, 𝑞 ⊢ 𝑞     Γ, 𝑞 ⊢ ¬𝑞
                          ─────────────────────── (¬E)
Γ ⊢ 𝑝 ∨ 𝑞     Γ, 𝑝 ⊢ 𝑝    Γ, 𝑞 ⊢ 𝑝
─────────────────────────────────── (∨E)
Γ ⊢ 𝑝

Note that (∨E) acts like a case distinction and we show in the second case,
where we suppose that 𝑞 holds, that this case is absurd. This allows us to show
that Γ ⊢ 𝑝 must hold.

In the final example, we show that the negation of a disjunction is very similar
to the conjunction of the negation of the subformulas of the disjunction. The
result is a proof of the equivalence ¬(𝜑 ∨ 𝜓 ) ↔ (¬𝜑 ∧ ¬𝜓 ), known as one of de
Morgan’s laws.

Example 2.8
Let 𝜑 and 𝜓 be arbitrary formulas. We first prove ¬(𝜑 ∨ 𝜓 ) ⊢ ¬𝜑. To avoid
writing too much, let us define Γ = ¬(𝜑 ∨𝜓 ), 𝜑. Then we have the following
derivation, which we call 𝐷1.

                Γ ⊢ 𝜑
                ────────── (∨I1)
Γ ⊢ ¬(𝜑 ∨ 𝜓)    Γ ⊢ 𝜑 ∨ 𝜓
────────────────────────── (¬E)
Γ ⊢ ⊥
──────────────── (¬I)
¬(𝜑 ∨ 𝜓) ⊢ ¬𝜑
Similarly, we can find a derivation 𝐷2 for ¬(𝜑 ∨ 𝜓) ⊢ ¬𝜓 and thus, by putting
these derivations together, we get the following derivation 𝐷3.

𝐷1                   𝐷2
¬(𝜑 ∨ 𝜓) ⊢ ¬𝜑        ¬(𝜑 ∨ 𝜓) ⊢ ¬𝜓
──────────────────────────────────── (∧I)
¬(𝜑 ∨ 𝜓) ⊢ ¬𝜑 ∧ ¬𝜓

For the other direction, we use Γ = ¬𝜑 ∧ ¬𝜓, 𝜑 ∨ 𝜓 and define then Γ1 = Γ, 𝜑
and Γ2 = Γ, 𝜓. With these, we have the following derivation 𝐷4.

             Γ1 ⊢ ¬𝜑 ∧ ¬𝜓                    Γ2 ⊢ ¬𝜑 ∧ ¬𝜓
             ───────────── (∧E1)             ───────────── (∧E2)
             Γ1 ⊢ ¬𝜑    Γ1 ⊢ 𝜑               Γ2 ⊢ ¬𝜓    Γ2 ⊢ 𝜓
             ───────────────── (¬E)          ───────────────── (¬E)
Γ ⊢ 𝜑 ∨ 𝜓    Γ1 ⊢ ⊥                          Γ2 ⊢ ⊥
─────────────────────────────────────────────────── (∨E)
Γ ⊢ ⊥
──────────────────── (¬I)
¬𝜑 ∧ ¬𝜓 ⊢ ¬(𝜑 ∨ 𝜓)

Finally, we obtain a proof for de Morgan’s law as follows.


𝐷3                              𝐷4
¬(𝜑 ∨ 𝜓) ⊢ ¬𝜑 ∧ ¬𝜓              ¬𝜑 ∧ ¬𝜓 ⊢ ¬(𝜑 ∨ 𝜓)
───────────────────── (→I)      ───────────────────── (→I)
⊢ ¬(𝜑 ∨ 𝜓) → ¬𝜑 ∧ ¬𝜓            ⊢ ¬𝜑 ∧ ¬𝜓 → ¬(𝜑 ∨ 𝜓)
──────────────────────────────────────────────────── (∧I)
⊢ (¬𝜑 ∧ ¬𝜓) ↔ ¬(𝜑 ∨ 𝜓)

?
Can you give admissible introduction and elimination rules for the
truth proposition ⊤?

“What about the equivalence ¬(𝜑 ∧ 𝜓 ) ↔ (¬𝜑 ∨ ¬𝜓 )?”, Clara wonders. That
equivalence is surprisingly delicate and it turns out that only one direction
is provable within the proof system ND, see exercise 7, while the other
direction is not! Clara is really puzzled now: “How can we know that and
why should it not be provable? And how can we then be sure that our proofs
are correct?” These are indeed valid questions that we have glossed over so
far. The first question we cannot answer easily at this stage because we would
need to understand first how to prove properties of deductions, which we shall
leave for another time. The second question touches on, what you may call,
logical pluralism: the idea that truth and provability are relative. “Does it then
even make sense to ask whether a proof is correct?”, Clara gets even more con-
fused. Of course it does make sense but you need to understand that this is
relative to a chosen deduction system, like the system ND that we have dis-
cussed. So to answer your third question: Once we have fixed a deduction
system, we only need to check in a proof that all steps are correct. This means
checking that a given deduction has the form of a proof tree according to defi-
nition 2.1, which in turn is an entirely mechanical task. In fact, I have heard
rumours that machines exist that can be fed a representation of a potential
proof tree and then compute whether it is an actual proof tree. This would be
of great help to my Formulario project, since anyone who trusts the verifying

machine can run it on the proof collection!
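To make the rumour a little more concrete, here is a rough, hypothetical sketch of such a verifying machine for a small fragment of ND, covering only the rules (Assum), (∧I), (→I) and (→E), with formulas in the tuple encoding sketched in chapter 1. It only illustrates that checking a proof tree against definition 2.1 is a mechanical traversal; it is neither the Formulario project's machine nor a complete implementation of ND.

```python
# Formulas are nested tuples as before; a sequent is a pair (context, formula),
# where the context is a tuple of formulas; a proof tree is a triple
# (rule label, sequent, list of child proof trees).

def imp(a, b): return ("→", a, b)

def check(tree):
    """Return True iff `tree` is a correct proof tree for the rules below."""
    rule, (ctx, phi), children = tree
    child_seqs = [child[1] for child in children]

    if rule == "Assum":                # φ : Γ, no children
        return children == [] and phi in ctx
    if rule == "∧I":                   # Γ ⊢ φ and Γ ⊢ ψ give Γ ⊢ φ ∧ ψ
        ok = (isinstance(phi, tuple) and len(phi) == 3 and phi[0] == "∧"
              and child_seqs == [(ctx, phi[1]), (ctx, phi[2])])
    elif rule == "→I":                 # Γ, φ ⊢ ψ gives Γ ⊢ φ → ψ
        ok = (isinstance(phi, tuple) and len(phi) == 3 and phi[0] == "→"
              and child_seqs == [(ctx + (phi[1],), phi[2])])
    elif rule == "→E":                 # Γ ⊢ φ → ψ and Γ ⊢ φ give Γ ⊢ ψ
        ok = (len(child_seqs) == 2
              and child_seqs[0] == (ctx, imp(child_seqs[1][1], phi))
              and child_seqs[1][0] == ctx)
    else:
        return False                   # unknown rule label
    return ok and all(check(child) for child in children)

# The proof tree of example 2.2 for ⊢ p → q → p:
k_proof = ("→I", ((), imp("p", imp("q", "p"))), [
    ("→I", (("p",), imp("q", "p")), [
        ("Assum", (("p", "q"), "p"), []),
    ]),
])

print(check(k_proof))   # True
```

Anyone who trusts these few lines only has to trust them once, not every individual proof that they are run on.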

Peano: My friends, we had a wonderful evening but I am getting tired now. It


seems that all the workers have gone home by now and I will have to clean
up my garden. But it was certainly worth it! (Peano laughs in great contentment.)

Clara: Thank you so much for your time and patience Professor Peano. We
have greatly enjoyed our time with you and learned many things, even be-
yond formal logic.

Isaac: If only all humans were as kind and generous as you are Professor, then
I may grow to like your kind.

Peano: I don’t know what you have seen in your future, but never lose hope
that the good in humans will prevail and a society of true solidarity will
emerge. You, as a robot, may have the chance to exist that long.

Isaac: I suppose that you have never seen the film WALL-E?

Peano: Now, please excuse me. I wish you all the best for your journey!

Isaac and Clara: Goodbye, Professor!

Clara and Isaac leave the vicinity of Peano’s villa. “The conversation with
Professor Peano was very illumin…”, Isaac freezes mid-sentence. “What is
it?”, but Clara gets no response. “♪ Il capo in piedi col suo bastone, o bella
ciao bella ciao bella ciao ciao ciao ♪”, a group of singing women is walking in
their direction. Clara starts to panic. What if they see us? This may disrupt
history! She looks around her and finds clothes drying in a garden. Clara
grabs bed linen and throws them over Isaac. One of the ladies in the passing
group addresses Clara: “Leave your work. Come with us!”. It seems that the
Trans-O-Matic is still working. “No, I still have this pile of linen to hang.” The
lady comes closer. “I help you and then you can come with us.” Clara starts
sweating.

“Look, there is finally the cart with the drinks!”, another person in the group
exclaims. All attention goes away from Clara and to the cart. “Finally, I am
dying of thirst!”. The lady turns away from Clara. “Come over and have some
drink with us”, she says and moves with the group to the cart. “I am coming,
just a moment!”, Clara shouts over.

“Isaac, it is time to leave”, she whispers. A buzzing sound under the linen
becomes audible and Clara starts to feel weightless. She closes her eyes.

Exercises

Exercise 6

Derive the following sequents.


a) 𝑝 ∧ 𝑞 ⊢ 𝑞 ∧ 𝑝
b) 𝑝 → (𝑝 → 𝑞), 𝑝 ⊢ 𝑞
c) 𝑝 → 𝑞 ⊢ 𝑝 ∧ 𝑠 → 𝑞 ∧ 𝑠
d) (𝑝 ∧ 𝑞) ∨ 𝑟 ⊢ 𝑞 ∨ 𝑟
e) (𝑝 ∧ 𝑞) ∧ 𝑟 ⊢ 𝑝 ∧ (𝑞 ∧ 𝑟)

Exercise 7

Let 𝜑 and 𝜓 be formulas. Derive the following sequents in the system ND.
a) 𝜑 ∧ 𝜓 ⊢ 𝜑
b) 𝜑 ∧ 𝜓 ⊢ 𝜓
c) ¬𝜑 ∨ ¬𝜓 ⊢ ¬(𝜑 ∧ 𝜓)

Exercise 8

State and prove admissible introduction and elimination rules for bi-implication.

Answers to the Quizzes


Answer to quiz on page 28 No, the requirement is redundant, because the
first requirement already asks for the correct rule label and the (Assum)-rule
has no further sequents that have to be proven.

Answer to quiz on page 31 The truth proposition has, dually to absurdity,


only an introduction rule.
3. Fitch-Style Deduction and
Classical Truth
Clara feels a thick carpet under her feet and the smell of coffee enters her
nostrils. She opens her eyes and finds herself on an armchair in a room with
walls covered in books. The interior of the room is simple but equipped with
a fireplace, another armchair, a coffee table and a heavy oak desk with a rush
chair, giving the room a warm feeling. Where the room forms a half circle,
two windows give view on green land and some trees outside. Several doors
connect the room with the remaining house.
But where is Isaac? Voices enter through a sliding door from the neighbouring
room. Clara gets out of the chair and sneaks up to the door to listen. She hears
Isaac talking to someone and cautiously opens the door. In the room Isaac is
talking to an old man with an enormous white beard, who is sitting on the
bench of a grand piano. Isaac turns his head: “There she is! Are you feeling
better?”

Clara: What happened?


Isaac: It seems that our little trip tired you, as you slept for almost 4 hours!
Clara: I certainly feel better. Where are we?
Peirce: The name is Peirce1 . Welcome to our home Arisbe.2 Would you like a
coffee?
Clara: Thank you! My name is Clara. Yes, a coffee would be wonderful.
Peirce: I will get it from the kitchen, please make yourself comfortable in the
study. [Peirce leaves the music room.]
Clara: Isaac, what on Earth happened? You stopped working for a moment
and now we are here with, I presume, Charles Sanders Peirce.
1 Bre98; BP21.
2 Bat83.

Isaac: Indeed, that Peirce. I was able to recover some parts of my memory
with the understanding of natural deduction and propositional logic. It
seems that accessing the memory and deducing knowledge from it occu-
pied all my resources, making me unresponsive. However, I got stuck on
some more complex parts of the memory that I do not understand. It is
then that I recovered and brought us here. In fact, the name Peirce came up
several times when I investigated my memory for more knowledge of logic.
[Peirce enters the study room, holding a tray with cups, a pot with steaming
coffee and some cookies.]
Peirce: Please come over here and take a seat. [He serves the coffee.] Isaac has
already explained to me that you are looking to understand logic, but then we
got a bit side-tracked on the meaning of symbols. So please tell me, what is
it that you would like to know.
Clara: We know how to express propositions as syntactic formulas and how
to formalise proofs in the system of natural deduction. However, we are
somewhat wary about the usability of natural deduction because the proofs
get so large and, more importantly, we are sceptical about the correctness
of the proof system. I mean, the rules are somewhat intuitive, or “natural”,
but I am more used to thinking in the categories of true and false.
Peirce: Good, you are following the first rule of logic: the desire to learn.3 You
may want to part from this dichotomy of truth eventually, but we come back
to that. For now, let us see if we can make natural deduction more practical.

3.1. Fitch-Style Natural Deduction


Eventually, you may want to use interactive proof systems, in which you can
focus on parts of a proof and get even help from a computer, but let us, for
the moment, continue to write proofs on paper to understand them better.
There is a representation of natural deduction proofs, called Fitch style or flag
style [Fit52; NG14], that avoids excessive growth in width, and makes the con-
struction and presentation of large proofs easier. However, it has the draw-
back that the conclusion of a proof rule is no longer directly preceded by all
its hypotheses and writing proofs requires either good planning or computer
support. With a bit of discipline and experience, the second issue will have
3 “Upon this first, and in one sense this sole, rule of reason, that in order to learn you must
desire to learn, and in so desiring not be satisfied with what you already incline to think,
there follows one corollary which itself deserves to be inscribed upon every wall of the city
of philosophy, Do not block the way of inquiry.” [Pei98].

negligible impact. The first issue is more severe, but we shall introduce a for-
mat for proofs that allows for a bit of two-dimensional structure and features
numbered references to help readability.
Consider, for example, the proof tree from example 2.3, shown here once more:

                        Γ, 𝑝, 𝑞 ⊢ 𝑝     Γ, 𝑝, 𝑞 ⊢ 𝑞
                        ───────────────────────────── (∧I)
Γ, 𝑝, 𝑞 ⊢ 𝑝 ∧ 𝑞 → 𝑟     Γ, 𝑝, 𝑞 ⊢ 𝑝 ∧ 𝑞
──────────────────────────────────────── (→E)
Γ, 𝑝, 𝑞 ⊢ 𝑟
──────────────── (→I)
Γ, 𝑝 ⊢ 𝑞 → 𝑟
──────────────── (→I)
Γ ⊢ 𝑝 → 𝑞 → 𝑟

Let us draw a tree that corresponds to this proof tree but only has numbers as
labels, one number per node:

     2   3
      \ /
  1    4
   \  /
    5
    |
    6
    |
    7

As you can imagine, we can make a list of all the nodes and refer to them by
their numbers instead of drawing the edges. If you have ever implemented
trees with pointers or arrays, then this will be very familiar. Also, if you ever
read some classical text on mathematical logic, then you will see that proofs
are often represented in this way. But just listing the proof steps and using
numbered references misses the introduction of premises. For instance, we
introduce a new assumption in node 7 that we will have to record. This leads
us to Fitch-style proofs, which combine numbered proof steps and so-called
flags that indicate the introduction of assumptions. The above proof tree looks
like this in Fitch-style:

1   𝑝 ∧ 𝑞 → 𝑟
2   𝑝
3   𝑞
4   𝑝 ∧ 𝑞          ∧I, 2, 3
5   𝑟              →E, 1, 4
6   𝑞 → 𝑟          →I, 3–5
7   𝑝 → 𝑞 → 𝑟      →I, 2–6

Fitch-style proofs number the lines of a proof from top to bottom. The mentioned flags are in lines 1, 2 and 3, and we say that we open flags in those lines. A flag indicates that everything proved within it requires the premises written on top of that flag. For instance, the whole proof ranging from line 2 to 7 requires the premise 𝑝 ∧ 𝑞 → 𝑟 in line 1, the proof ranging from line 3 to 6 requires the premise 𝑝 in line 2, and so forth. We also indent flags a bit and retain some two-dimensional elements, which helps readability a lot. Except for lines that introduce premises, we indicate on the right of a Fitch-style proof the proof rule that we use to obtain the formula in that line, together with the involved line numbers.

Isaac is a bit confused: “This is a very compact presentation of proofs! But


why do we write ‘2,3’ in line 4 but ‘3–5’ in line 6?” The answer lies in the
proof rules that are used. Notice that (∧I) does not introduce new premises
and merely combines two proven formulas into a conjunction. In contrast,
the introduction of implication makes a new hypothesis: to prove 𝑞 → 𝑟 , we
assume 𝑞 and deduce 𝑟. This is what the flag ranging from line 3 to 5 indicates.
Thus, the conclusion of 𝑞 → 𝑟 in line 6 needs to refer to the whole sub-proof that
ranges from line 3 to 5. In other words: Whenever we change our premises Γ
and prove the resulting sequent, then we have to create a sub-proof by opening
a flag, and then we have to refer to that whole sub-proof.

Clara begins to understand, but still has an issue: “Looking back at the rules
of ND in fig. 2.1, it appears that also the (∨E)-rule changes the premises, but
for two sub-proofs! How do we deal with that?” In exactly the same way!
Recall that we proved 𝑝 ∨ 𝑞, ¬𝑞 ⊢ 𝑝 in example 2.7. First of all, note that this
judgement has two premises, which we will write on two separate lines in the
proof. The goal of the proof is then to prove 𝑝 from these two premises:

1 𝑝 ∨ 𝑞
2 ¬𝑞
3 𝑝
4 𝑝 Assum, 3
5 𝑞
6 𝑝 ¬E, 5, 2
7 𝑝 ∨E, 1, 3–4, 5–6

The (∨E)-rule has been used in this proof on line 7 and refers to the disjunction
in line 1 that is eliminated and to the two sub-proofs in lines 3–4 and 5–6.

In general, a proof in Fitch-style for a sequent 𝜑 1, . . . , 𝜑𝑛 ⊢ 𝜓 will always have the following outline.

  1       𝜑 1
  ⋮       ⋮
  𝑛       𝜑𝑛
  𝑛+1     ⋮
            𝛾
            ⋮
  𝑚         𝛿
  𝑚+1     ⋮
  𝑘       𝜓        L

Everything above the horizontal line that opens a flag belongs to the assumptions or hypotheses, which can be used inside the flag and will be discharged once the flag closes. For example, 𝜑 1, . . . , 𝜑𝑛 are the assumptions under which 𝜓
holds, and 𝛾 is the assumption that is used to prove 𝛿. With line 𝑚, 𝛾 will be
discharged and is no longer usable from line 𝑚 + 1 on. This corresponds to
proving the sequent 𝜑 1, . . . , 𝜑𝑛 , 𝛾 ⊢ 𝛿 by means of a rule like (→I), as we did
in the first Fitch-style proof above in lines 6 and 7. In the same proof, you can
also see that flags can be nested arbitrarily. Finally, every line that is not a hypothesis needs to be given a label L that states the rule used and the lines that contain the information necessary to apply this rule.

We will not give a precise definition of Fitch-style proofs because this is rather cumbersome and instead rely on intuition built through examples. Now Clara
is a bit worried: “Is there no way to ensure that a Fitch-style proof is correct?”
Of course! Fitch-style proofs are just a different way of writing ND-proofs,
which means that, if it is possible to translate a given Fitch-style proof into
a correct ND-proof, then also the Fitch-style proof will be correct. However,
a better way is to leave the checking to a computer. For now let us continue
with some examples.

Example 3.1: It does not rain for Socrates — Fitch-style


In example 2.6, we have proved the sequent 𝑟 ∧ 𝑢 → 𝑤, 𝑢 ∧ ¬𝑤 ⊢ ¬𝑟 using
proof trees. Here is now the same proof in Fitch-style:

1 𝑟 ∧ 𝑢 → 𝑤
2 𝑢 ∧ ¬𝑤
3 𝑟
4 𝑢 ∧E, 2
5 𝑟 ∧ 𝑢 ∧I, 3, 4
6 𝑤 →E, 1, 5
7 ¬𝑤 ∧E, 2
8 ⊥ ¬E, 6, 7
9 ¬𝑟 ¬I, 3–8

You should be able to match all the steps to the original proof. One small
difference is that we do not indicate any longer which conjunct we want
to obtain from the application of the elimination rules for the conjunction.
However, it is easy to recover the indices by inspecting the involved for-
mulas. We will allow ourselves the same simplification for the introduction
rules of the disjunction.

?
Which of the following two Fitch-style proof attempts is correct? If
any, which sequent does it prove?

First attempt:

1   𝑝
2   𝑞
3   𝑝              Assum, 1
4   𝑞 → 𝑝          →I, 2–3

Second attempt:

1   𝑝
2   𝑟
3   𝑟 ∧ 𝑝          ∧I, 1, 2
4   𝑟 → 𝑝 ∧ 𝑟      →I, 2–3
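Before we turn to the meaning of formulas, here is a glimpse of what leaving the checking of Fitch-style proofs to a computer could look like. The following Python fragment is only a sketch, with representations and names chosen here purely for illustration: each proof line is a triple of a formula, the rule used and the referenced lines, formulas are nested tuples, and flags with their scoping are not modelled at all, so only a single application of (∧I) is checked.

# A toy check of a single (∧I) step in a numbered proof. Formulas are nested
# tuples such as ("and", "p", "q"); flags and scoping are deliberately ignored.
def check_and_intro(lines, n):
    formula, rule, refs = lines[n - 1]        # proof lines are 1-indexed
    if rule != "∧I" or len(refs) != 2:
        return False
    i, j = refs
    left, right = lines[i - 1][0], lines[j - 1][0]
    # Both references must lie strictly above line n, and the conclusion
    # must be exactly the conjunction of the referenced formulas.
    return i < n and j < n and formula == ("and", left, right)

# Lines 1 to 4 of the first Fitch-style proof above.
proof = [
    (("imp", ("and", "p", "q"), "r"), "premise", []),
    ("p", "premise", []),
    ("q", "premise", []),
    (("and", "p", "q"), "∧I", [2, 3]),
]
print(check_and_intro(proof, 4))  # True

A real checker would additionally track which flags are open at every line, so that references to already discharged hypotheses are rejected.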

3.2. Truth Values

Now that we have a more convenient way of presenting and constructing


proofs, let us come to the second issue: the correctness of the system ND it-
self. Before we get to that, we need to understand, however, that correctness
is a relative term. To say that a proof is correct, or sound, pertains to an un-
derstanding of the meaning of formulas and their relations. We thus need to
first develop such an understanding before we can even state what it means
for ND to be sound. There is a whole area of semantics for logic, which stud-
ies different meanings of formulas and their relations. But let us focus on a
prototypical example: Boolean truth values and truth tables.
For every propositional formula, we can try to understand under what conditions it represents a correct or incorrect, a true or false, proposition. This is the simplest way of assigning truth values to formulas.

𝑝 𝑞 𝑝 ∧𝑞
0 0 0
0 1 0
1 0 0
1 1 1

Table 3.1.: Truth table of conjunction

Example 3.2
Let 𝜑 be the formula 𝑝 ∧ 𝑞 for some propositional variables 𝑝 and 𝑞. What
would the truth value of 𝜑 be? Is 𝜑 true or false? That depends entirely on
the truth values of 𝑝 and 𝑞, since propositional variables have no intrinsic
meaning. Let us, for simplicity, write 0 instead of false and 1 instead of true.
We could choose another notation, but this particular notation is short and
will turn out to be beneficial, not the least because it corresponds to bits
and voltage levels in your circuits, Isaac!
Now think about the case that 𝑝 and 𝑞 are both true, that is, we assume they both have 1 as truth value. What would be the truth value of the conjunction 𝑝 ∧ 𝑞? Clearly, as 𝑝 and 𝑞 are true, so should be 𝑝 ∧ 𝑞 (read ∧ out as “and”). Thus, if 𝑝 and 𝑞 have the truth value 1, then 𝜑 should have the truth value 1 as well.
How about the case that one of them, say 𝑞, is false and has the truth value
0? Then we end up with the question whether “true and false” should be
true or false. As it is not possible that something is true and false at the
same time, we deduce that 𝑝 ∧ 𝑞 is false in this case and the formula 𝜑 has
the truth value 0.
We can continue like this and prepare the small table 3.1. This table lists
all the possible values that the variables 𝑝 and 𝑞 can take, together with
the truth value of 𝑝 ∧ 𝑞 that results from these values. Thus, the first two columns list all possibilities, while the last column is computed [Ane12] or deduced from the first two columns.
?
Suppose we prepare a truth table for the formula 𝑝∧𝑞∧𝑟 with three dif-
ferent propositional variables. How many rows would the table have?

Surely, we could devise truth tables for all kinds of formulas by hand, but what
are truth tables in general and how do they relate to the meaning of formulas?
To understand this, let us consider each row of the truth table in table 3.1
separately. Every row assigns truth values to the variables that appear in the formula and then determines the truth value of the overall formula. Note that it does not matter what truth value other variables, which do not appear in the formula, have. The following definition can be thought of as formalising the values of the variables in one row of a truth table.

Definition 3.3
We define the set of (Boolean) truth values B by B = {0, 1}. Given a set
𝑈 ⊆ PVar of propositional variables, a 𝑈 -valuation is a map 𝑣 : 𝑈 → B.
Example 3.4
Let 𝑈 = {𝑝, 𝑞} and thus a valuation for 𝑈 has to assign truth values to 𝑝
and 𝑞. For instance, one possible valuation 𝑣 : 𝑈 → B could be given by

𝑣 (𝑝) = 1 and 𝑣 (𝑞) = 0 .

This valuation corresponds to the third row of the truth table for conjunc-
tion in example 3.2.

Let us look at an example with variables coming from a more complex for-
mula.

Example 3.5
Recall that the formula 𝜑 given by (𝑟 ∧ 𝑢 → 𝑤) ∧ (𝑢 ∧ ¬𝑤) → ¬𝑟 appeared in example 1.4. Let 𝑈 = {𝑟, 𝑢, 𝑤 }, which consists of all variables that appear in 𝜑. One possible
𝑈 -valuation 𝑣 : 𝑈 → B could be given by

𝑣 (𝑟 ) = 1 𝑣 (𝑢) = 0 𝑣 (𝑤) = 1 .

The examples 3.4 and 3.5 have two important aspects: First, the set of variables
𝑈 is finite in each case. This will allow us to list all possible valuations for 𝑈
as a table, as in table 3.1. Second, the set contains all variables that are used
in particular formulas. Thus, a valuation will in each case fully determine the
Boolean values for said formulas. We will organise these into convenient truth
tables. To say what a truth table is, we need to first define how the value in
one row is determined.

Definition 3.6
We define the semantic implication as binary operation =⇒ : B×B → B
on truth values 𝑥, 𝑦 ∈ B by the following case distinction.
𝑥 =⇒ 𝑦 = 1 if 𝑥 ≤ 𝑦, and 𝑥 =⇒ 𝑦 = 0 otherwise.

Let 𝜑 be a propositional formula and write var(𝜑) for the set of proposi-
tional variables that occur in 𝜑. Given a formula 𝜑, a set 𝑈 of propositional
variables with var(𝜑) ⊆ 𝑈 , and a 𝑈 -valuation 𝑣, we define the Boolean
propositional semantics J𝜑K(𝑣) of 𝜑 at 𝑣 in terms of its top-level con-
nective and semantics of its direct subformulas as follows.
J𝑝K(𝑣) = 𝑣 (𝑝)
J⊥K(𝑣) = 0
J𝜑 1 ∧ 𝜑 2 K(𝑣) = min{J𝜑 1 K(𝑣), J𝜑 2 K(𝑣)}
J𝜑 1 ∨ 𝜑 2 K(𝑣) = max{J𝜑 1 K(𝑣), J𝜑 2 K(𝑣)}
J𝜑 1 → 𝜑 2 K(𝑣) = J𝜑 1 K(𝑣) =⇒ J𝜑 2 K(𝑣)

The intuition behind definition 3.6 is as follows. Since falsehood is represented


by the Boolean value 0 and truth by 1, the truth value of 𝜑 1 ∧ 𝜑 2 must be 0 if
one of the subformulas 𝜑 1 and 𝜑 2 has value 0, and may only get the value 1
if both subformulas have value 1, see table 3.1. This corresponds precisely to taking the smaller of the two truth values, that is, their minimum, as the truth value of their conjunction.

Example 3.7
Let 𝜑 be the formula 𝑝 ∧ 𝑞. We have var(𝜑) = {𝑝, 𝑞} and can define a
var(𝜑)-valuation 𝑣 by 𝑣 (𝑝) = 1 and 𝑣 (𝑞) = 0. With this definition

J𝑝 ∧ 𝑞K(𝑣) = min{J𝑝K(𝑣), J𝑞K(𝑣)} = min{𝑣 (𝑝), 𝑣 (𝑞)} = min{1, 0} = 0 .

This corresponds to the third row of table 3.1.

Dually to conjunction, a disjunction is true if one (or both) subformulas are


true, which amounts to taking the maximum of the truth values of the for-
mulas. Finally, the intuition behind the interpretation of implication is that 𝜑 1 → 𝜑 2 expresses that 𝜑 2 must evaluate to true whenever 𝜑 1 does. Semantic implication realises this idea by making 𝑥 =⇒ 𝑦 true exactly when 𝑥 ≤ 𝑦.
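The clauses of definition 3.6 translate directly into a small program. The following Python fragment is a minimal sketch (the tuple representation of formulas and all names are choices made here for illustration, not notation from the text): a formula is either a variable name, the tuple ("bot",) for ⊥, or a tuple whose first component names the top-level connective, and a valuation is a dictionary.

def evaluate(phi, v):
    # Boolean propositional semantics in the spirit of definition 3.6.
    if isinstance(phi, str):                      # propositional variable
        return v[phi]
    if phi == ("bot",):                           # absurdity ⊥
        return 0
    op, *args = phi
    if op == "and":
        return min(evaluate(args[0], v), evaluate(args[1], v))
    if op == "or":
        return max(evaluate(args[0], v), evaluate(args[1], v))
    if op == "imp":                               # semantic implication
        return 1 if evaluate(args[0], v) <= evaluate(args[1], v) else 0
    raise ValueError("unknown connective")

# Example 3.7: v(p) = 1, v(q) = 0 gives p ∧ q the value 0.
print(evaluate(("and", "p", "q"), {"p": 1, "q": 0}))  # 0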

We see in example 3.7 that we can list all possible valuations for a finite set
of variables and the semantics of a formula as a table. For instance, let 𝑈 =
{𝑝, 𝑞} and 𝜑 a formula with var(𝜑) ⊆ 𝑈 . Table 3.2 lists all 𝑈 -valuations and

𝑝 𝑞 𝜑
0 0 J𝜑K(𝑣 1 ) for 𝑣 1 (𝑝) = 0, 𝑣 1 (𝑞) =0
0 1 J𝜑K(𝑣 2 ) for 𝑣 2 (𝑝) = 0, 𝑣 2 (𝑞) =1
1 0 J𝜑K(𝑣 3 ) for 𝑣 3 (𝑝) = 1, 𝑣 3 (𝑞) =0
1 1 J𝜑K(𝑣 4 ) for 𝑣 4 (𝑝) = 1, 𝑣 4 (𝑞) =1

Table 3.2.: Truth table from semantics of a formula

the semantics for the corresponding valuation. This is how table 3.1 arises
from definition 3.6 by means of four calculations akin to that in example 3.7.
For concrete formulas, we will always use truth tables because they are very
concise. Table 3.3 sums up the semantics of all connectives in a truth table.

𝑝 𝑞 ⊥ 𝑝 ∧𝑞 𝑝 ∨𝑞 𝑝 →𝑞
0 0 0 0 0 1
0 1 0 0 1 1
1 0 0 0 1 0
1 1 0 1 1 1

Table 3.3.: Truth table for all propositional connectives
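A truth table such as table 3.3 can also be produced mechanically by enumerating all 2^n valuations of the variables, exactly as in table 3.2. A small sketch, reusing the evaluate function from the sketch above:

from itertools import product

def truth_table(phi, variables):
    # One row per valuation of the given variables, as in table 3.2.
    for values in product([0, 1], repeat=len(variables)):
        v = dict(zip(variables, values))
        yield values, evaluate(phi, v)

# Reproduce table 3.1, the truth table of p ∧ q, row by row.
for row, value in truth_table(("and", "p", "q"), ["p", "q"]):
    print(*row, value)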

We can also give semantics and truth tables for derived connectives.

Example 3.8
Recall that the negation ¬𝜑 of 𝜑 is defined to be the formula 𝜑 → ⊥. If 𝜑 is a propositional variable 𝑝, the truth table of ¬𝑝 is:
𝑝 ⊥ 𝑝→⊥
0 0 1
1 0 0
We can generalise this table to the negation of an arbitrary formula 𝜑 by
means of an arithmetic expression:

J¬𝜑K(𝑣) = 1 − J𝜑K(𝑣) .

It follows from this that


J⊤K(𝑣) = 1 .
?
Can you give a purely arithmetic expression for J𝜑 ↔ 𝜓 K(𝑣)?

When devising the truth table for larger formulas, it helps to add (some) sub-
formulas to the table. The following example demonstrates this.

Example 3.9
Let 𝑝 and 𝑞 be variables, and let 𝜑 be the formula 𝑝 ∨ (𝑞 → 𝑝) → 𝑞 → 𝑝.
Its truth table is given as follows.
𝑝 𝑞 𝑞→𝑝 𝑝 ∨ (𝑞 → 𝑝) 𝜑
0 0 1 1 1
0 1 0 0 1
1 0 1 1 1
1 1 1 1 1
This table lists not only the semantics of 𝜑 but also of some subformulas,
which is not strictly necessary but greatly aids the calculation.

This last example shows a formula that uses three propositional variables.

Example 3.10
Let 𝜑 be (𝑟 ∧ 𝑢 → 𝑤) ∧ (𝑢 ∧ ¬𝑤) → ¬𝑟 , which appeared in example 1.4 as
formalisation of Aristotle’s deduction in one formula. We see that var(𝜑) =
{𝑟, 𝑢, 𝑤 } and thus a valuation for 𝜑 has to assign truth values to each of
these.
𝑢 𝑤 𝑟 𝑟 ∧𝑢 → 𝑤 𝑢 ∧ ¬𝑤 ¬𝑟 𝜑
0 0 0 1 0 1 1
0 0 1 1 0 0 1
0 1 0 1 0 1 1
0 1 1 1 0 0 1
1 0 0 1 1 1 1
1 0 1 0 1 0 1
1 1 0 1 0 1 1
1 1 1 1 0 0 1

3.3. Satisfiability and Validity

“Making a truth table for a formula does not seem difficult, just a bit laborious.”, Clara interjects, “But what is the use of making truth tables like in examples 3.9 and 3.10? These formulas are always true, which would mean that the formu-
las are somewhat trivial, doesn’t it?”.
Not at all! That these formulas are always true means that the relation be-
tween the propositional variables that they express is valid. Let us make this
a definition.

Definition 3.11
Let 𝜑 be a propositional formula. We say that 𝜑 is valid if J𝜑K(𝑣) = 1 for
all valuations 𝑣 and satisfiable if J𝜑K(𝑣) = 1 for some valuation 𝑣. A valid
formula is also called a tautology .

Given the truth table of a formula 𝜑, we can easily read off that 𝜑 is valid if all the rows of the truth table for 𝜑 have a 1 and that 𝜑 is satisfiable if some row in the table has a 1. For instance, ¬𝑝 is satisfiable but not a tautology; such a formula is said to be contingent, see example 3.8. However, the formulas 𝑝 ∨ (𝑞 → 𝑝) → 𝑞 → 𝑝 and (𝑟 ∧ 𝑢 → 𝑤) ∧ (𝑢 ∧ ¬𝑤) → ¬𝑟 from examples 3.9 and 3.10 are both valid.
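Definition 3.11 can also be checked by brute force over all valuations of the variables of a formula. The sketch below again reuses the evaluate helper from section 3.2 and is only an illustration:

from itertools import product

def valuations(variables):
    for values in product([0, 1], repeat=len(variables)):
        yield dict(zip(variables, values))

def is_valid(phi, variables):
    # Valid: value 1 under every valuation of the variables.
    return all(evaluate(phi, v) == 1 for v in valuations(variables))

def is_satisfiable(phi, variables):
    # Satisfiable: value 1 under some valuation of the variables.
    return any(evaluate(phi, v) == 1 for v in valuations(variables))

# ¬p, written as p → ⊥, is satisfiable but not valid, hence contingent.
neg_p = ("imp", "p", ("bot",))
print(is_satisfiable(neg_p, ["p"]), is_valid(neg_p, ["p"]))  # True False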

3.4. Soundness and Consistency

Clara is a bit confused: “So you say that valid formulas express true relations
between propositional variables. This is clear, but I wonder now about the
natural deduction proofs. Those were also supposed to express valid relations.
Are those the same?”

Not exactly, but at least everything that is provable in ND is also valid. This is
called soundness. We do not have the tools to prove this, yet, but we can state
a simplified version.

Theorem 3.12: Soundness of ND for theorems


If 𝜑 is a theorem of ND, that is ⊢ 𝜑 is derivable, then 𝜑 is valid.

An important consequence is that natural deduction is consistent.

Corollary 3.13: Consistency of ND


There is no deduction for ⊢ ⊥ in ND.
Proof. Suppose that ⊢ ⊥ is derivable. By theorem 3.12, we would have
that J⊥K𝑣 = 1 for all valuations. This contradicts the definition of J⊥K𝑣 and
therefore ⊢ ⊥ cannot be derivable. □

3.5. Classical Logic and Completeness

“Great, this means that we can prove all valid propositions in ND!”, exclaims Clara, convinced. Not so fast! Theorem 3.12 says that you can only prove valid formulas, but not that all valid formulas are provable. In fact, the latter is not even true!

Theorem 3.14
The sequent ⊢ ¬¬𝑝 → 𝑝 is not provable in ND, but ¬¬𝑝 → 𝑝 is valid.

Proof. We have for all valuations 𝑣 that


J¬¬𝑝 → 𝑝K𝑣 = ((1 − (1 − J𝑝K𝑣 )) =⇒ J𝑝K𝑣 ) = (J𝑝K𝑣 =⇒ J𝑝K𝑣 ) = 1

and thus ¬¬𝑝 → 𝑝 is valid. To show that ⊢ ¬¬𝑝 → 𝑝 is not provable in


ND, we would need to provide a different model and show that ND is also
sound for that model, while the formula would not be a tautology in that
model. We leave the details to appendix C. □

Theorem 3.14 tells us that there are valid formulas that are not provable. Is
there a way to fix this? It turns out that it suffices to turn the formula ¬¬𝑝 → 𝑝
into a proof rule to obtain a complete deductive system.

Definition 3.15
The system cND (classical natural deduction) is obtained by adding to the
system ND the following rule for proofs by contradiction.
Γ, ¬𝜑 ⊢ ⊥
(Contra)
Γ ⊢𝜑
The rule (Contra) formalises the commonly known proof principle, called “proof by contradiction”, which proceeds by proving that the assumption that a property 𝜑 does not hold is absurd and then concluding that 𝜑 must therefore hold.
With this rule, we can prove in the system cND every valid proposition.

Theorem 3.16: Soundness and Completeness of cND


A sequent ⊢ 𝜑 is derivable in cND if and only if 𝜑 is valid.

Proof. Soundness is an extension of theorem 3.12 to include also the (Con-


tra)-rule. That this rule is sound follows from theorem 3.14. We are not yet
in the position to prove completeness, but it can be proved either by providing an algorithm that constructs proof trees for valid formulas [Gal87, sec. 3.4.7], or itself by a proof by contradiction!

“Why is the proof-by-contradiction rule so special?”, Clara asks, “It seems like
a perfectly good proof principle to me.”
The problem with the (Contra)-rule is that it is very specific to the Boolean
interpretation. Let me show you what I mean.

Example 3.17
We can derive the law of excluded middle (LEM) in cND:

1 ¬(𝑝 ∨ ¬𝑝)
2 ¬𝑝 example 2.8, 1
3 ¬¬𝑝 symmetric example 2.8, 1
4 ⊥ ¬E, 2, 3
5 𝑝 ∨ ¬𝑝 Contra, 1–4

The LEM states that every formula has to be either true or false, which is ex-
actly what we have in the Boolean model, and that a third possibility is not
allowed. However, there are perfectly reasonable interpretations of propo-


sitional formulas for which this is not the case, for instance, a logic with three values: a formula can be true, false, or its status can be unknown (see appendix C). Not only does the proof-by-contradiction principle exclude many models, it is also unintuitive and difficult to use because it proves a positive statement from a negative one. This is a story of its own and started a
whole school of logic called “constructive logic”, which is also closely related
to an interpretation of proofs as processes or programs, the so-called Brouwer-
Heyting-Kolmogorov interpretation. I am afraid though that this topic will take
us too far.

Let us briefly look at the use of completeness.

Example 3.18
Under the Boolean semantics, one can define the implication 𝑝 → 𝑞 alter-
natively as ¬𝑝 ∨ 𝑞, as the following truth table shows.
𝑝 𝑞 ¬𝑝 ¬𝑝 ∨ 𝑞 𝑝 →𝑞
0 0 1 1 1
0 1 1 1 1
1 0 0 0 0
1 1 0 1 1
By theorem 3.16 there must be a proof of ⊢ (¬𝑝 ∨ 𝑞) ↔ (𝑝 → 𝑞) in cND.
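Semantic equivalences such as the one in example 3.18 are easy to confirm mechanically. A small check with the evaluate sketch from section 3.2 (our own illustration):

# Check that ¬p ∨ q and p → q agree under all four valuations of p and q.
not_p_or_q = ("or", ("imp", "p", ("bot",)), "q")
p_imp_q = ("imp", "p", "q")
print(all(
    evaluate(not_p_or_q, {"p": p, "q": q}) == evaluate(p_imp_q, {"p": p, "q": q})
    for p in (0, 1) for q in (0, 1)
))  # True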

Isaac is sceptical: “I understand the content of completeness and that it gives


a derivation for the formula in example 3.18, but what does it look like?”

Indeed, the completeness proof does not give you an explicit derivation that could be independently checked; it does not give any independently verifiable evidence.

“This is like claiming that someone committed a crime without providing any
evidence!”, Clara jokes.

True, although not having a derivation is perhaps less detrimental for the victim, unless natural deduction is used in court. In any case, the only way out is to actually find the proof. For propositional logic, the good news is that there is an algorithm that can find the derivation, as is mentioned in the proof of theorem 3.16. In the case of other logics, we cannot always automatically
find derivations and it becomes important to find symbolic representations of
proofs that can be easily checked even though it may not be possible to find
them automatically.

Clara: It is often claimed that many problems can be solved well by machines,
if we just feed them enough data to analyse. However, it seems that you are
saying that there are limits to this.

Peirce: Oh yes, thinking requires signs that represent knowledge; this is cen-
tral to our thinking!4 But let us leave the study of semiotics for another time,
it will get dark soon and I understand that you still have a long way to go.

Clara: Thank you very much for your time, Professor Peirce!

Peirce: I am no longer… Well, it was a pleasure to meet you! I wish you a safe
voyage.

Isaac and Clara leave the house. Once outside the view of the house, Clara
stops and turns to Isaac: “While I was sleeping, you were talking to Peirce.
How is it possible that he just accepted you?”

“Well, I have discovered how to use another of my units, the Shroud-O-Matic.


It allows me to project another appearance into the pupil of a person, which I
chose to be that of a person in the early 20th century.”, Isaac explains, seem-
ingly with a smile.

“Ah, that’s why you still appear ‘normal’ to me!”, Clara is impressed but starts
to get used to the technological surprises. “Did you learn enough to continue
deciphering your memory?”

“Yes, I was able to get more information from the memory. There was mention of ‘my heart’, but I do not know what that means. The remaining memory is marked with First-Order Deduction, which is where I stopped.”

Clara raises her eyebrows, “A heart? How could you have a heart? However,
I have an idea where we can go to understand the remaining memory!”

By now, Clara knows the procedure. She closes her eyes and the floating feel-
ing sets in.

4 “To begin with, every concept and every thought beyond immediate perception is a
sign.” [Pei92, Pragmatism (1907)]. See also Marr [Mar95]

Exercises

Answers to the Quizzes


Answer to quiz on page 40 The first proof attempt is not correct because
the (→I)-rule uses a hypothesis but line 2 does not open a new flag. A correct
proof would look as follows.

1 𝑝
2 𝑞
3 𝑝 Assum, 1
4 𝑞 → 𝑝 →I, 2–3

The second attempt is correct and proves the sequent 𝑝 ⊢ 𝑟 → 𝑝 ∧ 𝑟 .

Answer to quiz on page 42 For each of the three variables, there are two
possibilities 0 and 1. Thus, there are 2³ = 8 different combinations of truth
values and each of them gets its own row.

Answer to quiz on page 45 We must have that J𝜑 ↔ 𝜓 K(𝑣) = 1 if and only


if J𝜑K(𝑣) = J𝜓 K(𝑣), which is equivalent to saying that |J𝜑K(𝑣) − J𝜓 K(𝑣)| = 0.
Therefore, J𝜑 ↔ 𝜓 K(𝑣) = 1 − |J𝜑K(𝑣) − J𝜓 K(𝑣)|.
Part II.

Semantics and Limits


Bibliography
[Ane12] Irving H. Anellis. “Peirce’s Truth-functional Analysis and the Ori-
gin of the Truth Table”. In: History and Philosophy of Logic 33.1
(2012), pp. 87–97. issn: 0144-5340. doi: 10 . 1080 / 01445340 .
2011.621702.
[Bat83] Penelope Hartshorne Batcheler. Historic Structure Report: Archi-
tectural Data Section, Charles S. Peirce House, Delaware Water
Gap National Recreation Area, Pennsylvania. Denver Service Cen-
ter, Mid-Atlantic/North Atlantic Team, Branch of Cultural Re-
sources, National Park Service, U. S. Dept. of the Interior, 1983,
p. 189. uRl: https : / / www . cspeirce . com / faqs / menu /
library/aboutcsp/batcheler/Arisbe_NPS_Historic_
Structure_Report_1983.pdf.
[Bre98] Joseph Brent. Charles Sanders Peirce: A Life. Vol. Rev. and enl. ed.
Bloomington, Ind: Indiana University Press, 1998. isbn: 978-0-253-
21161-3.
[BP21] Robert Burch and Kelly A. Parker. “Charles Sanders Peirce”. In: The
Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta and
Uri Nodelman. Summer 2022. Metaphysics Research Lab, Stan-
ford University, 2021. uRl: https://plato.stanford.edu/
archives/sum2022/entries/peirce/.
[Fit52] F. Fitch. Symbolic Logic, An Introduction. The Ronald Press Com-
pany, 1952.
[Gal87] Jean H. Gallier. Logic for Computer Science: Foundations of Auto-
matic Theorem Proving. Wiley, 1987. 511 pp. isbn: 978-0-471-
61546-0.
[Gan04] Jonardon Ganeri. “Indian Logic”. In: Handbook of the History of
Logic. Ed. by Dov M. Gabbay and John Woods. Vol. 1. North-
Holland, 2004, pp. 309–395. isbn: 1874-5857. doi: 10 . 1016 /
S1874-5857(04)80007-4.

[Gen35] Gerhard Gentzen. “Untersuchungen Über Das Logische Schließen.


I”. In: Mathematische Zeitschrift 39.1 (1935), pp. 176–210. issn:
1432-1823. doi: 10.1007/BF01201353.
[Ken02] Hubert Kennedy. Peano: Life and Works of Giuseppe Peano. Defini-
tive Edition. San Francisco: Peremptory Publications, 2002.
[Kle74] Stephen C. Kleene. Introduction to Metamathematics. 7. repr. Bib-
liotheca Mathematica. Groningen: Wolters-Noordhoff and North-
Holland, 1974. isbn: 978-0-444-10088-7.
[Mar95] David Marr. “Signs of C. S. Peirce”. In: American Literary History
7.4 (1995), pp. 681–699. issn: 0896-7148. JSTOR: 490069. uRl:
https://www.jstor.org/stable/490069.
[NG14] Rob Nederpelt and Herman Geuvers. Type Theory and
Formal Proof: An Introduction. 1st ed. New York, NY, USA: Cam-
bridge University Press, 2014. isbn: 1-107-03650-X.
[Pea16] Giuseppe Peano. “Sul Principio d’identità”. In: Mathesis, Società
Italiana di Matematica Bollettino 8 (1916), p. 40.
[Pei92] Peirce Edition Project, ed. The Essential Peirce, Volume 2: Selected
Philosophical Writings (1893-1913). Vol. 2. Indiana University Press,
1992.
[Pei98] Peirce Edition Project. “The First Rule of Logic (1898)”. In: The Es-
sential Peirce. Vol. 1. United States: Indiana University Press / the
Peirce Edition Project, 1998, pp. 42–56. isbn: 978-0-253-21190-3.
[PD01] Frank Pfenning and Rowan Davies. “A Judgmental Reconstruction
of Modal Logic”. In: Math. Struct. Comput. Sci. 11.4 (2001), pp. 511–
540. doi: 10.1017/S0960129501003322.
[Pra06] Dag Prawitz. Natural Deduction: A Proof-Theoretical Study. Dover
Books on Mathematics. Dover Publications, 2006. isbn: 978-0-486-
44655-4.
[PTW18] Graham Priest, Koji Tanaka, and Zach Weber. “Paraconsistent
Logic”. In: The Stanford Encyclopedia of Philosophy. Ed. by Ed-
ward N. Zalta. Summer 2018. Metaphysics Research Lab, Stanford
University, 2018. uRl: https : / / plato . stanford . edu /
archives/sum2018/entries/logic-paraconsistent/.
[RL08] Clara Silvia Roero and Erika Luciano. Giuseppe Peano: Matem-
atico e Maestro. Università di Torino e NerosuBianco: Diparti-
mento di Matematica, 2008. isbn: 978-88-900876-6-0. uRl: https:
//iris.unito.it/handle/2318/52823.

[Sol13] Daniel Solow. How to Read and Do Proofs: An Introduction to Math-


ematical Thought Processes. 6th edition. Hoboken, New Jersey: Wi-
ley, 2013. 336 pp. isbn: 978-1-118-16402-0.
A. Greek Letters
Letter Capitalised Name Command Command (capitalised)
𝛼 𝐴 alpha \alpha A
𝛽 𝐵 beta \beta B
𝛾 Γ gamma \gamma \Gamma
𝛿 Δ delta \delta \Delta
𝜀 𝐸 epsilon \epsilon E
𝜁 𝑍 zeta \zeta Z
𝜗 Θ theta \theta \Theta
𝜄 𝐼 iota \iota I
𝜘 𝐾 kappa \kappa K
𝜆 Λ lambda \lambda \Lambda
𝜇 𝑀 mu \mu M
𝜈 𝑁 nu \nu N
𝜉 Ξ xi \xi \Xi
𝑜 𝑂 omicron \omicron O
𝜋 Π pi \pi \Pi
𝜚 P rho \rho \Rho
𝜎 Σ sigma \sigma \Sigma
𝜏 𝑇 tau \tau T
𝜐 Υ upsilon \upsilon \Upsilon
𝜑 Φ phi \varphi \Phi
𝜒 𝑋 chi \chi X
𝜓 Ψ psi \psi \Psi
𝜔 Ω omega \omega \Omega

Table A.1.: Greek Letters and Their Pronunciation

You may find all Greek letters with their respective pronunciation and the LATEX-macro to typeset them in table A.1. The capital Greek letter P and the small letter 𝑜 require being set up with the mathspec package or the following
two declarations.

\DeclareMathSymbol{\Rho}{\mathalpha}{operators}{"50}
\DeclareMathSymbol{\omicron}{\mathord}{letters}{"6F}

More information on this can be found in the symbols-a4 documentation.


Moreover, by default LATEX uses different versions of some small Greek letters. To
obtain the result in table A.1, one has to add the following to the document.

\renewcommand*{\phi}{\varphi}
\renewcommand*{\epsilon}{\varepsilon}
\renewcommand*{\theta}{\vartheta}
\renewcommand*{\kappa}{\varkappa}
\renewcommand*{\rho}{\varrho}

The reason for using these is that 𝜑, 𝜀, 𝜗, 𝜘 and 𝜚 are easier to distinguish, easier to write, or just more common than their respective standard LATEX-versions
𝜙, 𝜖, 𝜃 , 𝜅 and 𝜌. For instance, 𝜅, 𝜌 and 𝜃 are difficult to distinguish from 𝑘, 𝑝
and Θ. Phi written as 𝜑 can be written in one movement, while 𝜙 needs two.
Epsilon is common in both forms, but 𝜀 is easier to write than 𝜖 because it also
just needs one movement, and 𝜖 can be easily confused with the membership
relation ∈ and the Euro symbol €.
B. Tools

B.1. Sets and Maps


Given sets 𝐴 and 𝐵, we denote by 𝐴 × 𝐵 their product consisting of ordered
pairs
𝐴 × 𝐵 = {(𝑎, 𝑏) | 𝑎 ∈ 𝐴, 𝑏 ∈ 𝐵}

and 𝐵𝐴 the set of maps 𝐴 → 𝐵. Given two maps 𝑓 : 𝐴 → 𝐵 and 𝑔 : 𝐵 → 𝐶,


we denote by 𝑔 ◦ 𝑓 the composition of 𝑔 and 𝑓 with (𝑔 ◦ 𝑓 )(𝑥) = 𝑔(𝑓 (𝑥)).
The composition can be pronounced as “𝑔 composed with 𝑓 ” or “𝑔 after 𝑓 ”.
We will also be using the powerset of sets 𝐴, denoted by P (𝐴), which is the
set of all subsets of 𝐴:
P (𝐴) = {𝑈 | 𝑈 ⊆ 𝐴}
The empty set is denoted by ∅.
The set of natural numbers starting at 0 is denoted by N. Given a natural
number 𝑛 ∈ N, we write [𝑛] for the set of the first 𝑛 natural numbers :1

[𝑛] = {𝑘 ∈ N | 𝑘 < 𝑛} .

In particular, we have [0] = ∅, [1] = {0} etc. Generally, [𝑛] has exactly 𝑛
elements.

B.2. Induction on Natural Numbers


We denote by N the set of natural numbers starting at 0. But what is this
set exactly? Naively, one could define it by saying N = {0, 1, 2, . . .}. However,
making precise the meaning of the dots is quite difficult. A better way is to
define the natural numbers as some set with a certain property, as follows.
1 Warning: In some contexts, [𝑛] includes also 𝑛 and has thus 𝑛 + 1 elements. For our purposes,
the present interpretation is the most useful though.

Definition B.1: Natural numbers and iteration principle


The natural numbers are a set N with an element 0 ∈ N and a map suc : N →
N, such that for any set 𝐴 with an element 𝑎 0 and a map 𝑓 : 𝐴 → 𝐴, there
is a unique map 𝑔 : N → 𝐴 such that 𝑔(0) = 𝑎 0 and 𝑔(suc(𝑛)) = 𝑓 (𝑔(𝑛)) for
all 𝑛 ∈ N. We say that 𝑔 is defined by iteration of (𝑎 0, 𝑓 ).

To make our life simple, we will write 𝑛 + 1 instead of suc(𝑛) for the successor
of the number 𝑛. We can then express any natural number by 1 = 0 + 1, 2 = 1 + 1,
3 = 2 + 1 and so forth. When we wrote N = {0, 1, 2, . . .} above, we intuitively
understood that N should consist of exactly the numbers expressed in this way
and that there should be no spurious elements like catfish ∈ N. That this is
the case is expressed by the iteration property because it tells us that every
element in N is either of the form 0 or 𝑛 + 1 for some 𝑛 ∈ N. This principle
allows us to count from any number down to 0. One can say that

elements of N are static representations of numbers, while iteration


reflects the dynamics of counting.

If we consider a pair (𝑎 0, 𝑓 ) given as in definition B.1, then one can compare


this to the following imperative program, in which a for-loop is used to re-
peatedly apply 𝑓 to the initial value 𝑎 0 .
def g(n):
    res = a0            # start from the initial value a0
    for i in range(n):  # apply f exactly n times
        res = f(res)
    return res

Using the iteration principle can be a nuisance because we have to explicitly


specify the pair (𝑎 0, 𝑓 ), which only gets worse for the more complicated struc-
tures that we encounter in, for example, ⁇. For instance, suppose that we wish
to define exponentiation in terms of multiplication. Our intuition would be to
proceed with the following definition.

𝑎^0 = 1 and 𝑎^(𝑛+1) = 𝑎^𝑛 · 𝑎 (B.1)

This definition arises by applying the iteration principle to the pair (1, 𝑓𝑎 ) with
𝑓𝑎 (𝑥) = 𝑥 · 𝑎. However, defining exponentiation via the formal iteration prin-
ciple is clearly inconvenient, and we will use the equational style in eq. (B.1)
whenever possible and we are sure that the equations can be reduced to the iter-
ation principle. Such a reduction is not always easy. For instance, the factorial
function can be expressed by the following equations.

0! = 1 and (𝑛 + 1)! = 𝑛! · (𝑛 + 1) (B.2)

If we tried to use the iteration principle directly to define ! as map N → N via


iteration, then we run into the problem that 𝑛 + 1 is not available for the mul-
tiplication. Instead, we have to define first a map fac : N → N × N by iteration
from (1, 1) and 𝑓 (𝑘, 𝑟 ) = (𝑘 + 1, 𝑟 · 𝑘). We then find that fac(𝑛) = (𝑛 + 1, 𝑛!).
This shows that the factorial function can be obtained by iteration but this
requires considerable overhead. Therefore, we typically prefer specifications
like in eq. (B.2).
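To make the detour through pairs concrete, here is a small Python sketch (the names are our own): iterate is the for-loop from above, packaged as a function of the pair (𝑎 0, 𝑓 ) and 𝑛, and fac obtains the factorial from the start value (1, 1) and the step function 𝑓 (𝑘, 𝑟 ) = (𝑘 + 1, 𝑟 · 𝑘) described above.

def iterate(a0, f, n):
    # Iteration principle of definition B.1: apply f to a0 exactly n times.
    res = a0
    for _ in range(n):
        res = f(res)
    return res

def fac(n):
    # Iterate on pairs (k, r); afterwards fac(n) = (n + 1, n!).
    return iterate((1, 1), lambda kr: (kr[0] + 1, kr[1] * kr[0]), n)

print(fac(5))  # (6, 120), since 5! = 120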

The iteration principle of the natural numbers can be used to define most maps
on the natural numbers that we may be interested in.2 From the iteration
principle, we can also derive the usual induction principle.3

Theorem B.2: Induction principle for N


Let 𝑃 be a property of N, where we write 𝑃 (𝑛) for the property 𝑃 at 𝑛 ∈ N.
To prove that 𝑃 (𝑛) holds for all 𝑛 ∈ N, it suffices to show that
• 𝑃 (0) holds (base case), and

• for all 𝑛 ∈ N, assuming that 𝑃 (𝑛) holds, that 𝑃 (𝑛 + 1) holds


(induction step).
Proof. The proof of the induction principle needs a concrete definition of
what a property is. For simplicity, we assume that a property is a subset,
that is, 𝑃 ⊆ N where 𝑃 (𝑛) means that 𝑛 ∈ 𝑃. Our goal is then to show that
all natural numbers are contained in 𝑃. In turn, we obtain this by iterating
(0, suc) to get a map 𝑔 : N → 𝑃 because 0 is in 𝑃 and for any 𝑛 ∈ 𝑃, also
𝑛 + 1 ∈ 𝑃. Since 𝑔 is uniquely defined by 𝑔(0) = 0 and 𝑔(𝑛 + 1) = 𝑔(𝑛) + 1,
we obtain that N ⊆ 𝑃, as required. □

The assumption 𝑃 (𝑛) in the induction step is called the induction hypothe-
sis, and it should be noted that it is only assumed for each individual 𝑛. Some-
times, the induction principle is mistakenly stated with the induction hypoth-
esis separated as follows.

2 Defining exact set of definable maps requires one to set up a formal theory, like primitive
recursion, Peano arithmetic or Gödel’s system T.
3 We do not specify formally at this point what a property is, but we can think of subsets 𝑃 ⊆ N.

Incorrect “induction principle”: To prove that 𝑃 (𝑛) holds for all


𝑛 ∈ N,
1. prove that 𝑃 (0) holds,
2. assume that 𝑃 (𝑛) holds for all 𝑛 ∈ N, and
3. prove that 𝑃 (𝑛 + 1) holds for all 𝑛 ∈ N.

This, however, is incorrect because the second point immediately implies the
first and third, thereby allowing us to prove any property! Clearly, we do
not want this and theorem B.2 is the correct principle. In ⁇, we will see how
first order logic allows us to resolve such ambiguities of natural language.

B.3. Trees and Induction

There are various kinds of trees: binary trees, which have two children below
every node; lists, which have just one child at any node; trees with arbitrary
branching, where every node may have an arbitrary number of children. Be-
sides the branching, trees usually have labels. For instance, in a list every node
is labelled with the corresponding list element. The aim of this section is to
give a general account of labelled trees.

We begin by characterising the labels and branching a tree may have.

Definition B.3
We call a pair (𝐴, 𝐵) a branching type if 𝐴 is a set and 𝐵 an 𝐴-indexed
family of sets, that is, for every 𝑎 ∈ 𝐴 we are given a set 𝐵𝑎 .

Note that the terminology of “branching type” also tacitly includes labels. The
intuition of this definition is that a tree of type (𝐴, 𝐵) will be labelled in 𝐴 and
will have at a node with label 𝑎 ∈ 𝐴 one branch for every element in 𝐵𝑎 .

Example B.4
Binary trees (not balanced!) labelled in N are induced by the branching
type (N ∪ {∗}, 𝐵) with 𝐵𝑛 = [2] and 𝐵 ∗ = ∅. The idea is that an inner node
can be labelled with a number, while a leaf is labelled with ∗. An inner node
has then exactly two children and a leaf has none.

            2
         0/   \1
         3     0
       0/ \1  0/ \1
       ∗  10  ∗   ∗
         0/ \1
         ∗   ∗

Figure B.1.: Example of a binary tree labelled in N. The root is labelled with the number 2 and has two children. The numbers 0 and 1 along the edges indicate the number of the outgoing branching.

Figure B.1 shows an example of a binary tree, which we wish to capture with
the branching type given in example B.4. The following definition shows how
general trees can be constructed and what their defining property is.

Definition B.5: Trees as inductive structures


Trees with branching type (𝐴, 𝐵) are given by a set T , or T (𝐴, 𝐵), together
with a family tr of maps tr𝑎 : T 𝐵𝑎 → T indexed by elements 𝑎 ∈ 𝐴, such
that the following iteration principle is fulfilled: for any set 𝑋 and family 𝑓
of maps with 𝑓𝑎 : 𝑋 𝐵𝑎 → 𝑋 , there is a unique map 𝑓¯ : T → 𝑋 with

𝑓¯(tr𝑎 (𝛼)) = 𝑓𝑎 ( 𝑓¯ ◦ 𝛼)

for all 𝑎 ∈ 𝐴 and 𝛼 : 𝐵𝑎 → T . We say that 𝑓¯ is defined by iteration from 𝑓


or that 𝑓¯ is the inductive extension of 𝑓 .

This definition packs a lot. Let us unfold it in the case of binary trees.

Example B.6: Binary trees


Let us write B = T (N ∪ {∗}, 𝐵) for a set of trees for the branching type
that we introduced in example B.4. This set comes with a family tr of maps
indexed by N ∪ {∗}, that is, we get for every 𝑛 ∈ N a map tr𝑛 : B [2] → B
and one map tr∗ : B ∅ → B. From the exercises in appendix B.1, we know
that giving a map 𝛼 in B [2] amounts to giving a pair of elements in B. Thus,
we can represent for such an 𝛼 the resulting tree 𝑡𝑟𝑛 (𝛼) like so, where a box
indicates a whole subtree:

        𝑛
     0/   \1
 [𝛼 (0)]   [𝛼 (1)]

Let us denote by ℓ the tree tr∗ (𝜀), where 𝜀 is the only element of B ∅ , see ⁇.
This tree represents a leaf in a binary tree. The tree from fig. B.1 can then
be represented in B by

𝑡 = tr2 (tr3 (ℓ, tr10 (ℓ, ℓ)), tr0 (ℓ, ℓ))

where we use the pair notation to create elements of B [2] .


So much for the construction of trees. The interesting part is what we can
do with them though. This is where the iteration principle comes in, which
allows us to traverse a tree. For instance, we can sum up all the labels in a
tree by using the family 𝑠 given by

𝑠𝑛 : N [2] → N 𝑠∗ : N∅ → N
𝑠𝑛 (𝛼) = 𝑛 + 𝛼 (0) + 𝛼 (1) 𝑠 ∗ (𝛼) = 0

this gives us a map 𝑠¯ : B → N. Running this map on the tree 𝑡 from above, we get the following.

𝑠¯(𝑡) = 𝑠 2 (¯𝑠 (tr3 (ℓ, tr10 (ℓ, ℓ))), 𝑠¯(tr0 (ℓ, ℓ)))
= 2 + 𝑠¯(tr3 (ℓ, tr10 (ℓ, ℓ))) + 𝑠¯(tr0 (ℓ, ℓ))
= 2 + (3 + 𝑠¯(ℓ) + 𝑠¯(tr10 (ℓ, ℓ))) + (0 + 𝑠¯(ℓ) + 𝑠¯(ℓ))
= 2 + (3 + 0 + (10 + 𝑠¯(ℓ) + 𝑠¯(ℓ))) + (0 + 0 + 0)
= 2 + (3 + 0 + (10 + 0 + 0)) + 0
= 2 + (3 + 0 + 10) + 0
= 2 + 13 + 0
= 15

Thus, we have traversed the trees depth-first and summed up all the inter-
mediate results.
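The iteration principle for trees is what functional programmers call a fold. The following Python fragment is our own sketch of example B.6: a leaf ℓ is represented by None, an inner node by a triple (label, left, right), and the family 𝑠 becomes the function passed to fold.

def fold(tree, f_node, leaf_value):
    # Iteration principle of definition B.5, specialised to binary trees;
    # the recursion plays the role of the unique map from the definition.
    if tree is None:                      # the leaf ℓ
        return leaf_value
    n, left, right = tree
    return f_node(n, fold(left, f_node, leaf_value),
                     fold(right, f_node, leaf_value))

# The tree t from figure B.1 and example B.6.
t = (2, (3, None, (10, None, None)), (0, None, None))

# Summing the labels: s_n(α) = n + α(0) + α(1) and s_* (α) = 0.
print(fold(t, lambda n, l, r: n + l + r, 0))  # 15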

So far, we have used the iteration principle only to define maps but not to prove
anything. Just like for the natural numbers, we can also obtain an induction
principle.

Theorem B.7: Tree Induction


Let T be a set of trees with branching type (𝐴, 𝐵) and 𝑃 a property of T ,
that is 𝑃 ⊆ T . If for all 𝑎 ∈ 𝐴 and 𝛼 : 𝐵𝑎 → 𝑃 we have tr𝑎 (𝛼) ∈ 𝑃, then 𝑃
holds for all trees in T (i.e., 𝑃 = T ).

B.4. Formal Languages

Recall that 𝐴∗ denotes the set of words over an alphabet 𝐴. Concretely, the set
of words is given by

𝐴∗ = {𝜀} ∪ {𝑎 0𝑎 1 · · · 𝑎𝑛 | 𝑛 ∈ N, 𝑎𝑘 ∈ 𝐴},

where 𝜀 is the empty word. For instance, if 𝐴 = {𝑎, 𝑏}, then 𝐴∗ contains the
singleton words 𝑎 and 𝑏, and longer words like 𝑎𝑏𝑏𝑎𝑎. The set of languages
over 𝐴 is the powerset P (𝐴∗ ), that is, the set of all subsets of 𝐴∗ . Given two
words 𝑣, 𝑤 ∈ 𝐴∗ , we denote by 𝑣𝑤 or 𝑣 + 𝑤 the concatenation of the words
𝑣 and 𝑤, that is, considering 𝑣 and 𝑤 as one word by reading their letters in
order.
(𝑣 0 · · · 𝑣𝑛 ) + (𝑤 0 · · · 𝑤𝑚 ) = 𝑣 0 · · · 𝑣𝑛𝑤 0 · · · 𝑤𝑚

We will often make use of context-free grammars, which generate lan-


guages. These grammars will be notated in so-called Backus-Naur form.
Let us start with an example.

Example B.8
Suppose we want to define a language of arithmetic expressions, in which
one can use numbers in N, addition and negation. In this case, we would
say that such expressions 𝑒 are generated by the following grammar.

𝑒 ::= 𝑛 , 𝑛 ∈ N | 𝑒 + 𝑒 | −𝑒 | (𝑒) (B.3)

This grammar can be read as follows. The symbol ::= says that an expres-
sion 𝑒 can be generated by using any of the four options on the right-hand
side, where the options are separated by the vertical bar. The first option is
that 𝑒 can be any natural number 𝑛, which is the only case where we can
start to generate an expression. To generate larger expressions, we have
to use any of the other three options. For instance, if we have generated
already expressions 𝑒 1 and 𝑒 2 , then the second option allows us to gener-


ate the expression “𝑒 1 + 𝑒 2 ”, the third option gives us “−𝑒 1 ” and the fourth
introduces parentheses “(𝑒 1 )”. It is important to realise that “+” and “−”
have no meaning, they are just syntax. In the terminology of (context-free)
grammars, “(”, “)”, “+”, “−” and “𝑛” are called terminal symbols, while 𝑒
in the grammar is called a non-terminal symbol.
We can now define the language generated by the grammar, call it 𝐸, as a subset of all words over the alphabet 𝐴 = N ∪ {+, −, (, )} by

𝐸 = {𝑒 ∈ 𝐴∗ | generated by eq. (B.3)} .

But what does “generated by” mean exactly? We can think of eq. (B.3) as
a way of describing trees of a certain shape. For instance, the expression
5 + (−3) can be seen as a linear, textual description of the following tree.
      +
    /   \
   5     ()
          |
          −
          |
          3

In appendix B.3, we have already seen how to describe such trees. The
labels are numbers and the operators, that is, we put

𝐿 = N ∪ {+, −, ()}

and the branching width 𝐵 is given by

𝐵𝑛 = ∅ 𝐵 + = [2] 𝐵 − = [1] 𝐵 () = [1] ,

which indicates that the numbers are leaves, “+” has two children, and “−”
and “()” have one. An expression can be seen as a tree of branching type
(𝐿, 𝐵). To get a language, we define a map flat : T (𝐿, 𝐵) → 𝐴∗ by iteration
of the family {𝑓𝑥 }𝑥 ∈𝐿 : (𝐴∗ ) 𝐵𝑥 → 𝐴∗ defined by

𝑓𝑛 (𝛼) = 𝑛
𝑓+ (𝛼) = 𝛼 (0) + ” + ” + 𝛼 (1)
𝑓− (𝛼) = ” − ” + 𝛼 (0)
𝑓 () (𝛼) = ”(” + 𝛼 (0) + ”)”

This gives us that expressions are given as the image of the map flat:

𝐸 = {flat(𝑡) | 𝑡 ∈ T (𝐿, 𝐵)}

There is something peculiar about trees compared to expressions: The latter


need parentheses to disambiguate: it is not clear how the word 5 + 4 + 1 was generated with our grammar, since there are two different trees that flat maps to this word:
     +                 +
   /   \             /   \
  5     +           +     1
       / \         / \
      4   1       5   4

To resolve this ambiguity, we normally denote these expressions by, re-


spectively, 5 + (4 + 1) and (5 + 4) + 1. This tells us that trees do not need the
parentheses and all ambiguity is removed. In fact, we can see this already
in the branching type, in which the parentheses do not add any branching
and merely reflect the parentheses in the grammar. Often parentheses can
be left out by constructing a grammar more cleverly than what we did, but
we leave that for our specific uses of grammars.
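The map flat can also be sketched as a short program. In the following Python fragment (the tuple representation is our own choice) an expression tree is either a number or a tuple whose first component is one of "+", "-" and "()":

def flat(e):
    # Flatten an expression tree into the word it describes.
    if isinstance(e, int):
        return str(e)
    if e[0] == "+":
        return flat(e[1]) + "+" + flat(e[2])
    if e[0] == "-":
        return "-" + flat(e[1])
    if e[0] == "()":
        return "(" + flat(e[1]) + ")"
    raise ValueError("unknown node")

print(flat(("+", 5, ("()", ("-", 3)))))               # 5+(-3)
# The two trees from above both flatten to the ambiguous word 5+4+1.
print(flat(("+", 5, ("+", 4, 1))), flat(("+", ("+", 5, 4), 1)))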

Definition B.9
Let 𝐴 be an alphabet (a set). A context-free grammar 𝐺 over 𝐴 is a tuple
(𝑉 , 𝑅) where 𝑉 is a finite set of non-terminal symbols and 𝑅 ⊆ 𝑉 × (𝐴 ∪ 𝑉 )∗
is a relation, the production rules of 𝐺.

Example B.10
Taking 𝑉 = {𝑒} and

𝑅 = {(𝑒, 𝑛) | 𝑛 ∈ N} ∪ {(𝑒, 𝑒 + 𝑒)} ∪ {(𝑒, −𝑒)} ∪ {(𝑒, (𝑒))}

yields the grammar given in example B.8.


C. Three-Valued Logic
In what follows, we describe the so-called 3-valued Heyting logic or algebra.
Let T be the set {0, 𝑋, 1}. Intuitively, we understand 0 and 1 as false and true, as in chapter 3, while the third element 𝑋 of T should be seen as an unknown truth value. This can occur, for example, in a computer when the voltage of a logical signal is not high enough or fluctuates and thereby creates an undefined logic state. We will see that T can be used as a domain for modelling propositional logic. First of all, we define an order ⊑ on T by

0⊑𝑋, 𝑋 ⊑ 1, 0 ⊑ 1, 0 ⊑ 0, 𝑋 ⊑𝑋 and 1 ⊑ 1.

This order allows us to use min and max as usual, and if we use them to inter-
pret conjunction and disjunction, then we will see that they conform to our
expectation of an unknown value as input to logic gates:

min(𝑎, 𝑏) (interpretation of ∧):

   𝑎 \ 𝑏   0   X   1
   0       0   0   0
   X       0   X   X
   1       0   X   1

max(𝑎, 𝑏) (interpretation of ∨):

   𝑎 \ 𝑏   0   X   1
   0       0   X   1
   X       X   X   1
   1       1   1   1

We can also define a semantic implication =⇒T on T just as we did for the Boolean semantics:

𝑎 =⇒T 𝑏 = 1 if 𝑎 ⊑ 𝑏, and 𝑎 =⇒T 𝑏 = 𝑏 otherwise.
The following tables list all the possibilities for =⇒T and the resulting negation 𝑎 =⇒T 0:

Table of 𝑎 =⇒T 𝑏:

   𝑎 \ 𝑏   0   X   1
   0       1   1   1
   X       0   1   1
   1       0   X   1

Table of the negation 𝑎 =⇒T 0:

   𝑎     𝑎 =⇒T 0
   0     1
   X     0
   1     0

Putting this all together, we can define for valuations 𝑣 : PVar → T a map J−KT𝑣 : PForm → T analogously to definition 3.6. This map also gives us an entailment relation ⊨T by defining

Γ ⊨T 𝜑 if for all 𝑣 we have JΓKT𝑣 ⊑ J𝜑KT𝑣 .

Also similarly to the Boolean model (theorem 3.12), one can prove the follow-
ing soundness result.

Theorem C.1
If Γ ⊢ 𝜑 is derivable in ND, then Γ ⊨T 𝜑.
However, the similarity with the Boolean model stops when we move to clas-
sical logic. Indeed, it is easy to see that ⊭T 𝑝 ∨ ¬𝑝. Let 𝑣 be the valuation that
is equal to 𝑋 everywhere. We then have

J𝑝 ∨ ¬𝑝K𝑣 = max{J𝑝K𝑣 , J¬𝑝K𝑣 } = max{𝑋, 0} = 𝑋 ≠ 1 .

This shows that ⊭T 𝑝 ∨ ¬𝑝 and therefore the (Contra)-rule from defini-


tion 3.15 cannot be sound for this three-valued model.
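The three-valued connectives are easy to experiment with in code. The sketch below is our own illustration, encoding the order 0 ⊑ 𝑋 ⊑ 1 by a numeric rank; it reproduces the failure of the law of excluded middle under the valuation that assigns 𝑋 to 𝑝.

RANK = {"0": 0, "X": 1, "1": 2}   # the order 0 ⊑ X ⊑ 1

def conj(a, b):                   # ∧ is the minimum w.r.t. ⊑
    return a if RANK[a] <= RANK[b] else b

def disj(a, b):                   # ∨ is the maximum w.r.t. ⊑
    return a if RANK[a] >= RANK[b] else b

def imp(a, b):                    # a =⇒_T b is 1 if a ⊑ b, and b otherwise
    return "1" if RANK[a] <= RANK[b] else b

def neg(a):                       # ¬a is a =⇒_T 0
    return imp(a, "0")

p = "X"
print(disj(p, neg(p)))            # X, so p ∨ ¬p is not valid in T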
There are other possibilities for interpreting the implication, see Łukasiewicz’s
or Kleene’s three-valued logic [Kle74, § 64], but different proof systems than
ND and cND are needed to handle those interpretations.
D. Logic Programming
1 % :- table path(_,_,lattice(shortest/3))
2 % :- table conn/2
3
4 % Partial order of lists by length; used in tabled execution
5 shortest(P1, P2, P) :-
6 length(P1, L1),
7 length(P2, L2),
8 (L1 < L2 -> P = P1; P = P2).
9
10 % Right
11 adjacent(pos(X1,Y1), pos(X2, Y1)) :- succ(X1, X2), X1 < 6.
12 % Down
13 adjacent(pos(X1,Y1), pos(X1, Y2)) :- succ(Y1, Y2), Y1 < 4.
14 % Left
15 adjacent(pos(X1,Y1), pos(X2, Y1)) :- succ(X2, X1).
16 % Up
17 adjacent(pos(X1,Y1), pos(X1, Y2)) :- succ(Y2, Y1).
18
19 % Can we go from U to V?
20 step(U, V) :- adjacent(U, V), free(V).
21
22 % conn(U, V) holds if two positions U and V connected.
23 conn(U, U).
24 conn(U, V) :-
25 conn(W, V),
26 step(U, W).
27
28 % Can our robot reach the goal?
29 connr :- robot(U), goal(V), conn(U, V).
30
31 % path(U, V, P) holds if P is a path from U to V. A path is here a list of positions.
32 path(U, U, [U]).
33 path(U, V, [U|P]) :-
34 path(W, V, P),
35 step(U, W).
36
37 % A path P with route(P) leads our robot from the initial position to the goal.
38 route(P) :- robot(U), goal(V), path(U, V, P).
39
40 % Initial position of robot.


41 robot(pos(2, 3)).
42
43 % Position of goal.
44 goal(pos(5,1)).
45
46 % All the positions that do not contain an obstacle.
47 free(pos(1,1)).
48 free(pos(1,4)).
49
50 free(pos(2,2)).
51 free(pos(2,3)).
52 free(pos(2,4)).
53
54 free(pos(3,1)).
55 free(pos(3,2)).
56 free(pos(3,4)).
57
58 free(pos(4,1)).
59 free(pos(4,2)).
60 free(pos(4,3)).
61 free(pos(4,4)).
62
63 free(pos(5,1)).
64 free(pos(5,3)).
65
66 free(pos(6,1)).
67 free(pos(6,2)).
68 free(pos(6,3)).
Index of Terminology and
Notation

General Notation

𝑔◦𝑓 composition of maps, read “𝑔 after 𝑓 ” See p. 21


∅ empty set, contains no elements See p. 21
[𝑛] set of first 𝑛 natural numbers See p. 21
N set of natural numbers See p. 21
P (𝐴) powerset of 𝐴 See p. 21
𝐴×𝐵 product of the sets 𝐴 and 𝐵 See p. 21
𝐵𝐴 set of maps from 𝐴 to 𝐵 See p. 21

Logic Syntax

Atomic formula the smallest building block of formu- See p. 11


las that have no logical connectives in
them; e.g., propositional variables in
the case of propositional logic
Direct subformula the formulas directly under the top- See p. 11
level connective of a formula
Parse tree tree-shaped representation of a formu- See p. 11
las, which can serve as basis for iter-
ation/induction and for the implemen-
tation on computers
PForm propositional formulas See p. 8
⊥ syntactic absurdity (“falsity”) See p. 9
↔ syntactic bi-implication (“if and only if” or “iff”) See p. 9
∧ syntactic conjunction (“and”) See p. 9

∨ syntactic disjunction (“or”) See p. 9


→ syntactic implication (“implies”) See p. 9
¬ syntactic negation (“not”) See p. 9
⊤ syntactic truth (“true” or “top”) See p. 9
Top-level connective the connective at the root of the parse See p. 11
tree of a formula

Operations on Formulas

Sub(𝜑) the formulas that appear anywhere in a given formula 𝜑 See p. 11
