
THE OPEN LOGIC TEXT

Complete Build

Open Logic Project


The Open Logic Text by the Open Logic Project is licensed under a Creative Commons Attribution 4.0 International License.
About the Open Logic Project

The Open Logic Text is an open-source, collaborative textbook of formal metalogic and formal methods, starting at an intermediate level (i.e., after an introductory formal logic course). Though aimed at a non-mathematical audience (in particular, students of philosophy and computer science), it is rigorous.
The Open Logic Text is a collaborative project and is under active development. Coverage of some topics currently included may not yet be complete, and many sections still require substantial revision. We plan to expand the text to cover more topics in the future. We also plan to add features to the text, such as a glossary, a list of further reading, historical notes, pictures, better explanations, sections explaining the relevance of results to philosophy, computer science, and mathematics, and more problems and examples. If you find an error, or have a suggestion, please let the project team know.
The project operates in the spirit of open source. Not only is the text freely available, but we also provide the LaTeX source under the Creative Commons Attribution license, which gives anyone the right to download, use, modify, rearrange, convert, and re-distribute our work, as long as they give appropriate credit.
Please see the Open Logic Project website at openlogicproject.org for additional information.


This file loads all content included in the Open Logic Project. Editorial notes like this, if displayed, indicate that the file was compiled without any thought to how this material will be presented. If you can read this, it is probably not advisable to teach or study from this PDF.
The Open Logic Project provides many mechanisms by which a text can be generated which is more appropriate for teaching or self-study. For instance, by default, the text will make all logical operators primitives and carry out all cases for all operators in proofs. But it is much better to leave some of these cases as exercises. The Open Logic Project is also a work in progress. In an effort to stimulate collaboration and improvement, material is included even if it is only in draft form, is missing exercises, etc. A PDF produced for a course will exclude these sections.
To find PDFs more suitable for teaching and studying, have a look at the sample courses available on the OLP website. To make your own, you might start from the sample driver file or look at the sources of the derived textbooks for more fancy and advanced examples.



Part I

Sets, Relations, Functions


The material in this part is a reasonably complete introduction to basic naive set theory. Unless students can be assumed to have this background, it’s probably advisable to start a course with a review of this material, at least the part on sets, functions, and relations. This should ensure that all students have the basic facility with mathematical notation required for any of the other logical sections. NB: This part does not cover induction directly.
The presentation here would benefit from additional examples, especially “real life” examples of relations of interest to the audience.
It is planned to expand this part to cover naive set theory more extensively.



Chapter 1

Sets

1.1 Basics

Sets are the most fundamental building blocks of mathematical objects. In fact, almost every mathematical object can be seen as a set of some kind. In logic, as in other parts of mathematics, sets and set-theoretical talk are ubiquitous. So it will be important to discuss what sets are, and to introduce the notations necessary to talk about sets and operations on sets in a standard way.

Definition 1.1 (Set). A set is a collection of objects, considered independently of the way it is specified, of the order of the objects in the set, or of their multiplicity. The objects making up the set are called elements or members of the set. If a is an element of a set X, we write a ∈ X (otherwise, a ∉ X). The set which has no elements is called the empty set and denoted by the symbol ∅.

Example 1.2. Whenever you have a bunch of objects, you can collect them together in a set. The set of Richard’s siblings, for instance, is a set that contains one person, and we could write it as S = {Ruth}. In general, when we have some objects a1, . . . , an, then the set consisting of exactly those objects is written {a1, . . . , an}. Frequently we’ll specify a set by some property that its elements share—as we just did, for instance, by specifying S as the set of Richard’s siblings. We’ll use the following shorthand notation for that: {x : . . . x . . .}, where the . . . x . . . stands for the property that x has to have in order to be counted among the elements of the set. In our example, we could have specified S also as

S = {x : x is a sibling of Richard}.

When we say that sets are independent of the way they are specified, we mean that the elements of a set are all that matters. For instance, it so happens that

{Nicole, Jacob},
{x : x is a niece or nephew of Richard}, and
{x : x is a child of Ruth}

are three ways of specifying one and the same set.
Saying that sets are considered independently of the order of their ele-
ments and their multiplicity is a fancy way of saying that
{Nicole, Jacob} and
{Jacob, Nicole}
are two ways of specifying the same set; and that
{Nicole, Jacob} and
{Jacob, Nicole, Nicole}
are also two ways of specifying the same set. In other words, all that matters is which elements a set has. The elements of a set are not ordered and each element occurs only once. When we specify or describe a set, elements may occur multiple times and in different orders, but any descriptions that only differ in the order of elements or in how many times elements are listed describe the same set.
Definition 1.3 (Extensionality). If X and Y are sets, then X and Y are identical,
X = Y, iff every element of X is also an element of Y, and vice versa.
Extensionality gives us a way of showing that sets are identical: to show that X = Y, show that whenever x ∈ X then also x ∈ Y, and whenever y ∈ Y then also y ∈ X.

1.2 Some Important Sets


Example 1.4. Mostly we’ll be dealing with sets that have mathematical objects as members. You will remember the various sets of numbers: N is the set of natural numbers {0, 1, 2, 3, . . . }; Z the set of integers,

{. . . , −3, −2, −1, 0, 1, 2, 3, . . . };

Q the set of rational numbers (Q = {z/n : z ∈ Z, n ∈ N, n ≠ 0}); and R the set of real numbers. These are all infinite sets, that is, they each have infinitely many elements. As it turns out, N, Z, Q have the same number of elements, while R has a whole bunch more—N, Z, Q are “enumerable and infinite” whereas R is “non-enumerable”.
We’ll sometimes also use the set of positive integers Z+ = {1, 2, 3, . . . } and
the set containing just the first two natural numbers B = {0, 1}.


Example 1.5 (Strings). Another interesting example is the set A* of finite strings over an alphabet A: any finite sequence of elements of A is a string over A. We include the empty string Λ among the strings over A, for every alphabet A. For instance,

B* = {Λ, 0, 1, 00, 01, 10, 11, 000, 001, 010, 011, 100, 101, 110, 111, 0000, . . .}.

If x = x1 . . . xn ∈ A* is a string consisting of n “letters” from A, then we say the length of the string is n and write len(x) = n.
Example 1.6 (Infinite sequences). For any set A we may also consider the set A^ω of infinite sequences of elements of A. An infinite sequence a1 a2 a3 a4 . . . consists of a one-way infinite list of objects, each one of which is an element of A.

1.3 Subsets
Sets are made up of their elements, and every element of a set is a part of that set. But there is also a sense in which some of the elements of a set taken together are a “part of” that set. For instance, the number 2 is part of the set of integers, but the set of even numbers is also a part of the set of integers. It’s important to keep those two senses of being part of a set separate.
Definition 1.7 (Subset). If every element of a set X is also an element of Y,
then we say that X is a subset of Y, and write X ⊆ Y.
Example 1.8. First of all, every set is a subset of itself, and ∅ is a subset of
every set. The set of even numbers is a subset of the set of natural numbers.
Also, { a, b} ⊆ { a, b, c}.
But { a, b, e} is not a subset of { a, b, c}.
Note that a set may contain other sets, not just as subsets but as elements!
In particular, a set may happen to both be an element and a subset of another,
e.g., {0} ∈ {0, {0}} and also {0} ⊆ {0, {0}}.
Extensionality gives a criterion of identity for sets: X = Y iff every element
of X is also an element of Y and vice versa. The definition of “subset” defines
X ⊆ Y precisely as the first half of this criterion: every element of X is also
an element of Y. Of course the definition also applies if we switch X and Y:
Y ⊆ X iff every element of Y is also an element of X. And that, in turn, is
exactly the “vice versa” part of extensionality. In other words, extensionality
amounts to: X = Y iff X ⊆ Y and Y ⊆ X.
Definition 1.9 (Power Set). The set consisting of all subsets of a set X is called
the power set of X, written ℘( X ).
℘( X ) = {Y : Y ⊆ X }


Figure 1.1: The union X ∪ Y of two sets is the set of elements of X together with those of Y.

Example 1.10. What are all the possible subsets of { a, b, c}? They are: ∅,
{ a}, {b}, {c}, { a, b}, { a, c}, {b, c}, { a, b, c}. The set of all these subsets is
℘({ a, b, c}):
℘({ a, b, c}) = {∅, { a}, {b}, {c}, { a, b}, {b, c}, { a, c}, { a, b, c}}
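This construction is easy to mirror computationally. The following is a minimal Python sketch (the helper name powerset is ours, not the text’s), generating all subsets of a finite set by taking combinations of every size:

```python
from itertools import chain, combinations

def powerset(X):
    """All subsets of the finite set X, as frozensets."""
    xs = list(X)
    # Combinations of size 0 (the empty set) up to size len(X) (X itself).
    return {frozenset(c) for c in
            chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))}

assert len(powerset({"a", "b", "c"})) == 8   # matches the listing above
```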

1.4 Unions and Intersections


We can define new sets by abstraction, and the property used to define the
new set can mention sets we’ve already defined. So for instance, if X and Y
are sets, the set { x : x ∈ X ∨ x ∈ Y } defines a set which consists of all those
objects which are elements of either X or Y, i.e., it’s the set that combines the
elements of X and Y. This operation on sets—combining them—is very useful
and common, and so we give it a name and a symbol.
Definition 1.11 (Union). The union of two sets X and Y, written X ∪ Y, is the
set of all things which are elements of X, Y, or both.

X ∪ Y = {x : x ∈ X ∨ x ∈ Y}

Example 1.12. Since the multiplicity of elements doesn’t matter, the union of
two sets which have an element in common contains that element only once,
e.g., { a, b, c} ∪ { a, 0, 1} = { a, b, c, 0, 1}.
The union of a set and one of its subsets is just the bigger set: { a, b, c} ∪
{ a} = { a, b, c}.
The union of a set with the empty set is identical to the set: { a, b, c} ∪ ∅ =
{ a, b, c}.
The operation that forms the set of all elements that X and Y have in common is called their intersection.


Figure 1.2: The intersection X ∩ Y of two sets is the set of elements they have
in common.

Definition 1.13 (Intersection). The intersection of two sets X and Y, written X ∩ Y, is the set of all things which are elements of both X and Y.

X ∩ Y = {x : x ∈ X ∧ x ∈ Y}

Two sets are called disjoint if their intersection is empty. This means they have no elements in common.

Example 1.14. If two sets have no elements in common, their intersection is empty: {a, b, c} ∩ {0, 1} = ∅.
If two sets do have elements in common, their intersection is the set of all
those: { a, b, c} ∩ { a, b, d} = { a, b}.
The intersection of a set with one of its subsets is just the smaller set:
{ a, b, c} ∩ { a, b} = { a, b}.
The intersection of any set with the empty set is empty: { a, b, c} ∩ ∅ = ∅.

We can also form the union or intersection of more than two sets. An
elegant way of dealing with this in general is the following: suppose you
collect all the sets you want to form the union (or intersection) of into a single
set. Then we can define the union of all our original sets as the set of all objects
which belong to at least one element of the set, and the intersection as the set
of all objects which belong to every element of the set.
Definition 1.15. If Z is a set of sets, then ⋃Z is the set of elements of elements of Z:

⋃Z = {x : x belongs to an element of Z}, i.e.,
⋃Z = {x : there is a Y ∈ Z so that x ∈ Y}


Figure 1.3: The difference X \ Y of two sets is the set of those elements of X
which are not also elements of Y.

Definition 1.16. If Z is a set of sets, then ⋂Z is the set of objects which all elements of Z have in common:

⋂Z = {x : x belongs to every element of Z}, i.e.,
⋂Z = {x : for all Y ∈ Z, x ∈ Y}

Example 1.17. Suppose Z = {{a, b}, {a, d, e}, {a, d}}. Then ⋃Z = {a, b, d, e} and ⋂Z = {a}.

We could also do the same for a sequence of sets X1, X2, . . .

⋃_i Xi = {x : x belongs to one of the Xi}
⋂_i Xi = {x : x belongs to every Xi}.

Definition 1.18 (Difference). The difference X \ Y is the set of all elements of X which are not also elements of Y, i.e.,

X \ Y = {x : x ∈ X and x ∉ Y}.
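Python’s built-in sets happen to support all of the operations defined in this section, which makes the textbook examples easy to check. A small sketch (we use a list for Z, since Python sets cannot themselves contain sets):

```python
X, Y = {"a", "b", "c"}, {"a", 0, 1}
assert X | Y == {"a", "b", "c", 0, 1}        # union, Example 1.12
assert X & {"a", "b", "d"} == {"a", "b"}     # intersection, Example 1.14
assert X - {"a"} == {"b", "c"}               # difference, Definition 1.18

# Generalized union and intersection over Z = {{a, b}, {a, d, e}, {a, d}}:
Z = [{"a", "b"}, {"a", "d", "e"}, {"a", "d"}]
assert set().union(*Z) == {"a", "b", "d", "e"}   # big union of Z
assert set.intersection(*Z) == {"a"}             # big intersection of Z
```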

1.5 Pairs, Tuples, Cartesian Products


Sets have no order to their elements. We just think of them as an unordered collection. So if we want to represent order, we use ordered pairs ⟨x, y⟩. In an unordered pair {x, y}, the order does not matter: {x, y} = {y, x}. In an ordered pair, it does: if x ≠ y, then ⟨x, y⟩ ≠ ⟨y, x⟩.
Sometimes we also want ordered sequences of more than two objects, e.g., triples ⟨x, y, z⟩, quadruples ⟨x, y, z, u⟩, and so on. In fact, we can think of triples as special ordered pairs, where the first element is itself an ordered pair: ⟨x, y, z⟩ is short for ⟨⟨x, y⟩, z⟩. The same is true for quadruples: ⟨x, y, z, u⟩ is short for ⟨⟨⟨x, y⟩, z⟩, u⟩, and so on. In general, we talk of ordered n-tuples ⟨x1, . . . , xn⟩.

Definition 1.19 (Cartesian product). Given sets X and Y, their Cartesian product X × Y is {⟨x, y⟩ : x ∈ X and y ∈ Y}.

Example 1.20. If X = {0, 1}, and Y = {1, a, b}, then their product is

X × Y = {⟨0, 1⟩, ⟨0, a⟩, ⟨0, b⟩, ⟨1, 1⟩, ⟨1, a⟩, ⟨1, b⟩}.

Example 1.21. If X is a set, the product of X with itself, X × X, is also written X². It is the set of all pairs ⟨x, y⟩ with x, y ∈ X. The set of all triples ⟨x, y, z⟩ is X³, and so on. We can give an inductive definition:

X¹ = X
X^(k+1) = X^k × X

Proposition 1.22. If X has n elements and Y has m elements, then X × Y has n · m elements.

Proof. For every element x in X, there are m elements of the form ⟨x, y⟩ ∈ X × Y. Let Yx = {⟨x, y⟩ : y ∈ Y}. Since whenever x1 ≠ x2, ⟨x1, y⟩ ≠ ⟨x2, y⟩, Yx1 ∩ Yx2 = ∅. But if X = {x1, . . . , xn}, then X × Y = Yx1 ∪ · · · ∪ Yxn, and so has n · m elements.
To visualize this, arrange the elements of X × Y in a grid:

Yx1 = {⟨x1, y1⟩  ⟨x1, y2⟩  . . .  ⟨x1, ym⟩}
Yx2 = {⟨x2, y1⟩  ⟨x2, y2⟩  . . .  ⟨x2, ym⟩}
  ⋮
Yxn = {⟨xn, y1⟩  ⟨xn, y2⟩  . . .  ⟨xn, ym⟩}

Since the xi are all different, and the yj are all different, no two of the pairs in this grid are the same, and there are n · m of them.

Example 1.23. If X is a set, a word over X is any sequence of elements of X. A sequence can be thought of as an n-tuple of elements of X. For instance, if X = {a, b, c}, then the sequence “bac” can be thought of as the triple ⟨b, a, c⟩. Words, i.e., sequences of symbols, are of crucial importance in computer science, of course. By convention, we count elements of X as sequences of length 1, and ∅ as the sequence of length 0. The set of all words over X then is

X* = {∅} ∪ X ∪ X² ∪ X³ ∪ . . .
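A sketch of products in Python (itertools.product computes X × Y directly; the helper nth_power is ours and flattens the nested pairs of the inductive definition into plain k-tuples):

```python
from itertools import product

X, Y = {0, 1}, {1, "a", "b"}
assert len(set(product(X, Y))) == len(X) * len(Y)   # Proposition 1.22: n * m

def nth_power(X, k):
    """X^k via the inductive idea X^1 = X, X^(k+1) = X^k x X (as k-tuples)."""
    result = {(x,) for x in X}
    for _ in range(k - 1):
        result = {t + (x,) for t in result for x in X}
    return result

assert len(nth_power({"a", "b", "c"}, 3)) == 27     # 3^3 triples, cf. Problem 1.7
```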


1.6 Russell’s Paradox


We said that one can define sets by specifying a property that its elements
share, e.g., defining the set of Richard’s siblings as

S = { x : x is a sibling of Richard}.

In the very general context of mathematics one must be careful, however: not
every property lends itself to comprehension. Some properties do not define
sets. If they did, we would run into outright contradictions. One example of
such a case is Russell’s Paradox.
Sets may be elements of other sets—for instance, the power set of a set X
is made up of sets. And so it makes sense, of course, to ask or investigate
whether a set is an element of another set. Can a set be a member of itself?
Nothing about the idea of a set seems to rule this out. For instance, surely all
sets form a collection of objects, so we should be able to collect them into a
single set—the set of all sets. And it, being a set, would be an element of the
set of all sets.
Russell’s Paradox arises when we consider the property of not having itself as an element. The set of all sets does not have this property, but all sets we have encountered so far have it. N is not an element of N, since it is a set, not a natural number. ℘(X) is generally not an element of ℘(X); e.g., ℘(R) ∉ ℘(R) since it is a set of sets of real numbers, not a set of real numbers. What if we suppose that there is a set of all sets that do not have themselves as an element? Does

R = {x : x ∉ x}

exist?
If R exists, it makes sense to ask if R ∈ R or not: it must be that either R ∈ R or R ∉ R. Suppose the former is true, i.e., R ∈ R. R was defined as the set of all sets that are not elements of themselves, and so if R ∈ R, then R does not have this defining property of R. But only sets that have this property are in R, hence, R cannot be an element of R, i.e., R ∉ R. But R can’t both be and not be an element of R, so we have a contradiction.
Since the assumption that R ∈ R leads to a contradiction, we have R ∉ R. But this also leads to a contradiction! For if R ∉ R, it does have the defining property of R, and so would be an element of R just like all the other non-self-containing sets. And again, it can’t both not be and be an element of R.

Problems
Problem 1.1. Show that there is only one empty set, i.e., show that if X and Y
are sets without members, then X = Y.

Problem 1.2. List all subsets of { a, b, c, d}.


Problem 1.3. Show that if X has n elements, then ℘(X) has 2^n elements.

Problem 1.4. Prove rigorously that if X ⊆ Y, then X ∪ Y = Y.

Problem 1.5. Prove rigorously that if X ⊆ Y, then X ∩ Y = X.

Problem 1.6. List all elements of {1, 2, 3}³.

Problem 1.7. Show, by induction on k, that for all k ≥ 1, if X has n elements, then X^k has n^k elements.



Chapter 2

Relations

2.1 Relations as Sets


You will no doubt remember some interesting relations between objects of some of the sets we’ve mentioned. For instance, numbers come with an order relation < and from the theory of whole numbers the relation of divisibility without remainder (usually written n | m) may be familiar. There is also the relation is identical with that every object bears to itself and to no other thing. But there are many more interesting relations that we’ll encounter, and even more possible relations. Before we review them, we’ll just point out that we can look at relations as a special sort of set. For this, first recall what a pair is: if a and b are two objects, we can combine them into the ordered pair ⟨a, b⟩. Note that for ordered pairs the order does matter, e.g., ⟨a, b⟩ ≠ ⟨b, a⟩, in contrast to unordered pairs, i.e., 2-element sets, where {a, b} = {b, a}.
If X and Y are sets, then the Cartesian product X × Y of X and Y is the set of all pairs ⟨a, b⟩ with a ∈ X and b ∈ Y. In particular, X² = X × X is the set of all pairs from X.
Now consider a relation on a set, e.g., the <-relation on the set N of natural numbers, and consider the set of all pairs of numbers ⟨n, m⟩ where n < m, i.e.,

R = {⟨n, m⟩ : n, m ∈ N and n < m}.

Then there is a close connection between the number n being less than a number m and the corresponding pair ⟨n, m⟩ being a member of R, namely, n < m if and only if ⟨n, m⟩ ∈ R. In a sense we can consider the set R to be the <-relation on the set N. In the same way we can construct a subset of N² for any relation between numbers. Conversely, given any set of pairs of numbers S ⊆ N², there is a corresponding relation between numbers, namely, the relationship n bears to m if and only if ⟨n, m⟩ ∈ S. This justifies the following definition:


Definition 2.1 (Binary relation). A binary relation on a set X is a subset of X². If R ⊆ X² is a binary relation on X and x, y ∈ X, we write Rxy (or xRy) for ⟨x, y⟩ ∈ R.
Example 2.2. The set N² of pairs of natural numbers can be listed in a 2-dimensional matrix like this:

⟨0, 0⟩  ⟨0, 1⟩  ⟨0, 2⟩  ⟨0, 3⟩  . . .
⟨1, 0⟩  ⟨1, 1⟩  ⟨1, 2⟩  ⟨1, 3⟩  . . .
⟨2, 0⟩  ⟨2, 1⟩  ⟨2, 2⟩  ⟨2, 3⟩  . . .
⟨3, 0⟩  ⟨3, 1⟩  ⟨3, 2⟩  ⟨3, 3⟩  . . .
  ⋮       ⋮       ⋮       ⋮      ⋱

The subset consisting of the pairs lying on the diagonal, i.e.,

{⟨0, 0⟩, ⟨1, 1⟩, ⟨2, 2⟩, . . . },

is the identity relation on N. (Since the identity relation is popular, let’s define IdX = {⟨x, x⟩ : x ∈ X} for any set X.) The subset of all pairs lying above the diagonal, i.e.,

L = {⟨0, 1⟩, ⟨0, 2⟩, . . . , ⟨1, 2⟩, ⟨1, 3⟩, . . . , ⟨2, 3⟩, ⟨2, 4⟩, . . .},

is the less than relation, i.e., Lnm iff n < m. The subset of pairs below the diagonal, i.e.,

G = {⟨1, 0⟩, ⟨2, 0⟩, ⟨2, 1⟩, ⟨3, 0⟩, ⟨3, 1⟩, ⟨3, 2⟩, . . . },

is the greater than relation, i.e., Gnm iff n > m. The union of L with the identity relation IdN, i.e., K = L ∪ IdN, is the less than or equal to relation: Knm iff n ≤ m. Similarly, H = G ∪ IdN is the greater than or equal to relation. L, G, K, and H are special kinds of relations called orders. L and G have the property that no number bears L or G to itself (i.e., for all n, neither Lnn nor Gnn). Relations with this property are called irreflexive, and, if they also happen to be orders, they are called strict orders.
Although orders and identity are important and natural relations, it should be emphasized that according to our definition any subset of X² is a relation on X, regardless of how unnatural or contrived it seems. In particular, ∅ is a relation on any set (the empty relation, which no pair of elements bears), and X² itself is a relation on X as well (one which every pair bears), called the universal relation. But also something like E = {⟨n, m⟩ : n > 5 or m × n ≥ 34} counts as a relation.

2.2 Special Properties of Relations


Some kinds of relations turn out to be so common that they have been given special names. For instance, ≤ and ⊆ both relate their respective domains (say, N in the case of ≤ and ℘(X) in the case of ⊆) in similar ways. To get at exactly how these relations are similar, and how they differ, we categorize them according to some special properties that relations can have. It turns out that (combinations of) some of these special properties are especially important: orders and equivalence relations.

Definition 2.3 (Reflexivity). A relation R ⊆ X² is reflexive iff, for every x ∈ X, Rxx.

Definition 2.4 (Transitivity). A relation R ⊆ X² is transitive iff, whenever Rxy and Ryz, then also Rxz.

Definition 2.5 (Symmetry). A relation R ⊆ X² is symmetric iff, whenever Rxy, then also Ryx.

Definition 2.6 (Anti-symmetry). A relation R ⊆ X² is anti-symmetric iff, whenever both Rxy and Ryx, then x = y (or, in other words: if x ≠ y then either ¬Rxy or ¬Ryx).
In a symmetric relation, Rxy and Ryx always hold together, or neither holds. In an anti-symmetric relation, the only way for Rxy and Ryx to hold together is if x = y. Note that this does not require that Rxy and Ryx hold when x = y, only that it isn’t ruled out. So an anti-symmetric relation can be reflexive, but it is not the case that every anti-symmetric relation is reflexive. Also note that being anti-symmetric and merely not being symmetric are different conditions. In fact, a relation can be both symmetric and anti-symmetric at the same time (e.g., the identity relation is).

Definition 2.7 (Connectivity). A relation R ⊆ X² is connected if for all x, y ∈ X, if x ≠ y, then either Rxy or Ryx.

Definition 2.8 (Partial order). A relation R ⊆ X² that is reflexive, transitive, and anti-symmetric is called a partial order.

Definition 2.9 (Linear order). A partial order that is also connected is called a linear order.

Definition 2.10 (Equivalence relation). A relation R ⊆ X² that is reflexive, symmetric, and transitive is called an equivalence relation.
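On a finite set these properties can all be checked by brute force, which is a useful way to build intuition. A small Python sketch (the checker names are ours), with a relation represented as a set of pairs:

```python
def is_reflexive(R, X):
    return all((x, x) in R for x in X)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(R):
    return all(x == y for (x, y) in R if (y, x) in R)

def is_transitive(R):
    return all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

def is_connected(R, X):
    return all((x, y) in R or (y, x) in R for x in X for y in X if x != y)

X = {1, 2, 3}
identity = {(x, x) for x in X}
# The identity relation is both symmetric and anti-symmetric:
assert is_symmetric(identity) and is_antisymmetric(identity)
# <= on X is reflexive, anti-symmetric, transitive, and connected,
# i.e., a linear order (Definitions 2.8 and 2.9):
leq = {(x, y) for x in X for y in X if x <= y}
assert is_reflexive(leq, X) and is_antisymmetric(leq)
assert is_transitive(leq) and is_connected(leq, X)
```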

2.3 Orders
Very often we are interested in comparisons between objects, where one object
may be less or equal or greater than another in a certain respect. Size is the
most obvious example of such a comparative relation, or order. But not all
such relations are alike in all their properties. For instance, some comparative
relations require any two objects to be comparable, others don’t. (If they do,


we call them linear or total.) Some include identity (like ≤) and some exclude
it (like <). Let’s get some order into all this.
Definition 2.11 (Preorder). A relation which is both reflexive and transitive is
called a preorder.
Definition 2.12 (Partial order). A preorder which is also anti-symmetric is
called a partial order.
Definition 2.13 (Linear order). A partial order which is also connected is
called a total order or linear order.
Example 2.14. Every linear order is also a partial order, and every partial order is also a preorder, but the converses don’t hold. The universal relation on X is a preorder, since it is reflexive and transitive. But, if X has more than one element, the universal relation is not anti-symmetric, and so not a partial order. For a somewhat less silly example, consider the no longer than relation ≼ on B*: x ≼ y iff len(x) ≤ len(y). This is a preorder (reflexive and transitive), and even connected, but not a partial order, since it is not anti-symmetric. For instance, 01 ≼ 10 and 10 ≼ 01, but 01 ≠ 10.
The relation of divisibility without remainder gives us an example of a partial order which isn’t a linear order: for integers n, m, we say n (evenly) divides m, in symbols: n | m, if there is some k so that m = kn. On N, this is a partial order, but not a linear order: for instance, 2 ∤ 3 and also 3 ∤ 2. Considered as a relation on Z, divisibility is only a preorder since anti-symmetry fails: 1 | −1 and −1 | 1 but 1 ≠ −1. Another important partial order is the relation ⊆ on a set of sets.
Notice that the examples L and G from example 2.2, although we said there that they were called “strict orders,” are not linear orders even though they are connected (they are not reflexive). But there is a close connection, as we will see momentarily.
Definition 2.15 (Irreflexivity). A relation R on X is called irreflexive if, for all
x ∈ X, ¬ Rxx.
Definition 2.16 (Asymmetry). A relation R on X is called asymmetric if for no
pair x, y ∈ X we have Rxy and Ryx.
Definition 2.17 (Strict order). A strict order is a relation which is irreflexive,
asymmetric, and transitive.
Definition 2.18 (Strict linear order). A strict order which is also connected is
called a strict linear order.
A strict order on X can be turned into a partial order by adding the diagonal IdX, i.e., adding all the pairs ⟨x, x⟩. (This is called the reflexive closure of R.) Conversely, starting from a partial order, one can get a strict order by removing IdX.


Proposition 2.19. 1. If R is a strict (linear) order on X, then R⁺ = R ∪ IdX is a partial order (linear order).

2. If R is a partial order (linear order) on X, then R⁻ = R \ IdX is a strict (linear) order.

Proof. 1. Suppose R is a strict order, i.e., R ⊆ X² and R is irreflexive, asymmetric, and transitive. Let R⁺ = R ∪ IdX. We have to show that R⁺ is reflexive, antisymmetric, and transitive.
R⁺ is clearly reflexive, since for all x ∈ X, ⟨x, x⟩ ∈ IdX ⊆ R⁺.
To show R⁺ is antisymmetric, suppose R⁺xy and R⁺yx, i.e., ⟨x, y⟩ and ⟨y, x⟩ ∈ R⁺, and x ≠ y. Since ⟨x, y⟩ ∈ R ∪ IdX, but ⟨x, y⟩ ∉ IdX, we must have ⟨x, y⟩ ∈ R, i.e., Rxy. Similarly we get that Ryx. But this contradicts the assumption that R is asymmetric.
Now suppose that R⁺xy and R⁺yz. If both ⟨x, y⟩ ∈ R and ⟨y, z⟩ ∈ R, it follows that ⟨x, z⟩ ∈ R since R is transitive. Otherwise, either ⟨x, y⟩ ∈ IdX, i.e., x = y, or ⟨y, z⟩ ∈ IdX, i.e., y = z. In the first case, we have that R⁺yz by assumption, x = y, hence R⁺xz. Similarly in the second case. In either case, R⁺xz, thus, R⁺ is also transitive.
If R is connected, then for all x ≠ y, either Rxy or Ryx, i.e., either ⟨x, y⟩ ∈ R or ⟨y, x⟩ ∈ R. Since R ⊆ R⁺, this remains true of R⁺, so R⁺ is connected as well.

2. Exercise.

Example 2.20. ≤ is the linear order corresponding to the strict linear order <. ⊆ is the partial order corresponding to the strict order ⊊.

2.4 Graphs
A graph is a diagram in which points—called “nodes” or “vertices” (plural of “vertex”)—are connected by edges. Graphs are a ubiquitous tool in discrete mathematics and in computer science. They are incredibly useful for representing, and visualizing, relationships and structures, from concrete things like networks of various kinds to abstract structures such as the possible outcomes of decisions. There are many different kinds of graphs in the literature which differ, e.g., according to whether the edges are directed or not, have labels or not, whether there can be edges from a node to the same node, multiple edges between the same nodes, etc. Directed graphs have a special connection to relations.

Definition 2.21 (Directed graph). A directed graph G = ⟨V, E⟩ is a set of vertices V and a set of edges E ⊆ V².


According to our definition, a graph just is a set together with a relation on that set. Of course, when talking about graphs, it’s only natural to expect that they are graphically represented: we can draw a graph by connecting two vertices v1 and v2 by an arrow iff ⟨v1, v2⟩ ∈ E. The only difference between a relation by itself and a graph is that a graph specifies the set of vertices, i.e., a graph may have isolated vertices. The important point, however, is that every relation R on a set X can be seen as a directed graph ⟨X, R⟩, and conversely, a directed graph ⟨V, E⟩ can be seen as a relation E ⊆ V² with the set V explicitly specified.

Example 2.22. The graph ⟨V, E⟩ with V = {1, 2, 3, 4} and E = {⟨1, 1⟩, ⟨1, 2⟩, ⟨1, 3⟩, ⟨2, 3⟩} looks like this:

[figure: vertices 1, 2, 3, and 4, with a loop at 1 and arrows from 1 to 2, from 1 to 3, and from 2 to 3; vertex 4 is isolated]

This is a different graph than ⟨V′, E⟩ with V′ = {1, 2, 3}, which looks like this:

[figure: the same vertices and arrows, but without the isolated vertex 4]

2.5 Operations on Relations


It is often useful to modify or combine relations. We’ve already used the union
of relations above (which is just the union of two relations considered as sets
of pairs). Here are some other ways:

Definition 2.23. Let R, S ⊆ X² be relations and Y a set.

1. The inverse R⁻¹ of R is R⁻¹ = {⟨y, x⟩ : ⟨x, y⟩ ∈ R}.

2. The relative product R | S of R and S is

(R | S) = {⟨x, z⟩ : for some y, Rxy and Syz}

3. The restriction R↾Y of R to Y is R ∩ Y².

4. The application R[Y] of R to Y is

R[Y] = {y : for some x ∈ Y, Rxy}

Example 2.24. Let S ⊆ Z² be the successor relation on Z, i.e., the set of pairs ⟨x, y⟩ where x + 1 = y, for x, y ∈ Z. Sxy holds iff y is the successor of x.

1. The inverse S⁻¹ of S is the predecessor relation, i.e., S⁻¹xy iff x − 1 = y.

2. The relative product S | S is the relation x bears to y if x + 2 = y.

3. The restriction of S to N is the successor relation on N.

4. The application of S to a set, e.g., S[{1, 2, 3}] is {2, 3, 4}.

Definition 2.25 (Transitive closure). The transitive closure R⁺ of a relation R ⊆ X² is R⁺ = ⋃_{i=1}^{∞} R^i, where R¹ = R and R^(i+1) = R^i | R.
The reflexive transitive closure of R is R* = R⁺ ∪ IdX.

Example 2.26. Take the successor relation S ⊆ Z². S²xy iff x + 2 = y, S³xy iff x + 3 = y, etc. So S⁺xy iff for some i ≥ 1, x + i = y. In other words, S⁺xy iff x < y (and S*xy iff x ≤ y).
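All of these operations can be written as one-line set comprehensions, and on a finite set the transitive closure can be computed by iterating the relative product until nothing new is added. A sketch (helper names ours), using a finite fragment of the successor relation in place of the infinite S:

```python
def inverse(R):
    return {(y, x) for (x, y) in R}

def rel_product(R, S):
    return {(x, z) for (x, y) in R for (w, z) in S if y == w}

def restriction(R, Y):
    return {(x, y) for (x, y) in R if x in Y and y in Y}

def application(R, Y):
    return {y for (x, y) in R if x in Y}

def transitive_closure(R):
    """Smallest transitive relation containing R (finite case)."""
    closure = set(R)
    while True:
        extended = closure | rel_product(closure, closure)
        if extended == closure:
            return closure
        closure = extended

S = {(x, x + 1) for x in range(-5, 5)}        # successor, restricted to -5..5
assert (0, -1) in inverse(S)                  # predecessor relation
assert rel_product(S, S) == {(x, x + 2) for x in range(-5, 4)}
assert restriction(S, range(0, 5)) == {(x, x + 1) for x in range(0, 4)}
assert application(S, {1, 2, 3}) == {2, 3, 4}  # Example 2.24(4)
# On this finite fragment, S+ relates x to y only when x < y:
assert all(x < y for (x, y) in transitive_closure(S))
```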

Problems
Problem 2.1. List the elements of the relation ⊆ on the set ℘({ a, b, c}).

Problem 2.2. Give examples of relations that are (a) reflexive and symmetric
but not transitive, (b) reflexive and anti-symmetric, (c) anti-symmetric, transi-
tive, but not reflexive, and (d) reflexive, symmetric, and transitive. Do not use
relations on numbers or sets.

Problem 2.3. Complete the proof of Proposition 2.19, i.e., prove that if R is a partial order on X, then R⁻ = R \ IdX is a strict order.

Problem 2.4. Consider the less-than-or-equal-to relation ≤ on the set {1, 2, 3, 4} as a graph and draw the corresponding diagram.

Problem 2.5. Show that the transitive closure of R is in fact transitive.



Chapter 3

Functions

3.1 Basics
A function is a mapping which pairs each object of a given set with a single partner in another set. For instance, the operation of adding 1 defines a function: each number n is paired with a unique number n + 1. More generally, functions may take pairs, triples, etc., of inputs and return some kind of output. Many functions are familiar to us from basic arithmetic. For instance, addition and multiplication are functions. They take in two numbers and return a third. In this mathematical, abstract sense, a function is a black box: what matters is only what output is paired with what input, not the method for calculating the output.

Definition 3.1 (Function). A function f : X → Y is a mapping of each element of X to an element of Y. We call X the domain of f and Y the codomain of f. The elements of X are called inputs or arguments of f, and the element of Y that is paired with an argument x by f is called the value of f for argument x, written f(x).
The range ran(f) of f is the subset of the codomain consisting of the values of f for some argument; ran(f) = {f(x) : x ∈ X}.

Example 3.2. Multiplication takes pairs of natural numbers as inputs and maps them to natural numbers as outputs, so goes from N × N (the domain) to N (the codomain). As it turns out, the range is also N, since every n ∈ N is n × 1.

Multiplication is a function because it pairs each input—each pair of natural numbers—with a single output: × : N² → N. By contrast, the square root operation applied to the domain N is not functional, since each positive integer n has two square roots: √n and −√n. We can make it functional by only returning the positive square root: √ : N → R. The relation that pairs each student in a class with their final grade is a function—no student can get two different final grades in the same class. The relation that pairs each student in a class with their parents is not a function—generally each student will have at least two parents.

Figure 3.1: A function is a mapping of each element of one set to an element of another. An arrow points from an argument in the domain to the corresponding value in the codomain.
We can define functions by specifying in some precise way what the value of the function is for every possible argument. Different ways of doing this are by giving a formula, describing a method for computing the value, or listing the values for each argument. However functions are defined, we must make sure that for each argument we specify one, and only one, value.

Example 3.3. Let f : N → N be defined such that f(x) = x + 1. This is a definition that specifies f as a function which takes in natural numbers and outputs natural numbers. It tells us that, given a natural number x, f will output its successor x + 1. In this case, the codomain N is not the range of f, since the natural number 0 is not the successor of any natural number. The range of f is the set of all positive integers, Z+.

Example 3.4. Let g : N → N be defined such that g(x) = x + 2 − 1. This tells us that g is a function which takes in natural numbers and outputs natural numbers. Given a natural number x, g will output the predecessor of the successor of the successor of x, i.e., x + 1. Despite their different definitions, g and f are the same function.

Functions f and g defined above are the same because for any natural
number x, x + 2 − 1 = x + 1. f and g pair each natural number with the
same output. The definitions for f and g specify the same mapping by means
of different equations, and so count as the same function.

Example 3.5. We can also define functions by cases. For instance, we could define h : N → N by

h(x) = x/2 if x is even,
h(x) = (x + 1)/2 if x is odd.


Figure 3.2: A surjective function has every element of the codomain as a value.

Figure 3.3: An injective function never maps two different arguments to the
same value.

Since every natural number is either even or odd, the output of this function will always be a natural number. Just remember that if you define a function by cases, every possible input must fall into exactly one case. In some cases, this will require a proof that the cases are exhaustive and exclusive.
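The three definitions above can be transcribed into Python directly. We cannot check that f and g agree on all of N, but checking an initial segment illustrates the point that a function is nothing over and above its input-output pairing; a short sketch (not part of the text’s formal development):

```python
def f(x): return x + 1
def g(x): return x + 2 - 1

def h(x):
    # Defined by cases; every natural number falls into exactly one case.
    return x // 2 if x % 2 == 0 else (x + 1) // 2

# f and g are the same function: the same value for every argument we test.
assert all(f(x) == g(x) for x in range(1000))
assert [h(x) for x in range(6)] == [0, 1, 1, 2, 2, 3]
```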

3.2 Kinds of Functions


Definition 3.6 (Surjective function). A function f : X → Y is surjective iff Y
is also the range of f , i.e., for every y ∈ Y there is at least one x ∈ X such
that f ( x ) = y.
If you want to show that a function is surjective, then you need to show
that every object in the codomain is the output of the function given some
input or other.
Definition 3.7 (Injective function). A function f : X → Y is injective iff for each
y ∈ Y there is at most one x ∈ X such that f ( x ) = y.
Any function pairs each possible input with a unique output. An injective function has a unique input for each possible output. If you want to show that a function f is injective, you need to show that for any elements x and x′ of the domain, if f(x) = f(x′), then x = x′.


Figure 3.4: A bijective function uniquely pairs the elements of the codomain
with those of the domain.

An example of a function which is neither injective nor surjective is the constant function f : N → N where f(x) = 1.
An example of a function which is both injective and surjective is the identity function f : N → N where f(x) = x.
The successor function f : N → N where f(x) = x + 1 is injective, but not surjective.
The function

f(x) = x/2 if x is even,
f(x) = (x + 1)/2 if x is odd

is surjective, but not injective.

Definition 3.8 (Bijection). A function f : X → Y is bijective iff it is both surjective and injective. We call such a function a bijection from X to Y (or between X and Y).
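For functions on finite domains, surjectivity and injectivity reduce to comparing value sets and counting. A sketch (helper names ours) that checks the four examples just given, restricted to an initial segment of N:

```python
def is_surjective(f, domain, codomain):
    return {f(x) for x in domain} == set(codomain)

def is_injective(f, domain):
    values = [f(x) for x in domain]
    return len(values) == len(set(values))    # no value hit twice

N = range(10)
assert not is_injective(lambda x: 1, N) and not is_surjective(lambda x: 1, N, N)
assert is_injective(lambda x: x, N) and is_surjective(lambda x: x, N, N)
assert is_injective(lambda x: x + 1, N)                  # successor
assert not is_surjective(lambda x: x + 1, N, range(11))  # 0 is never a value
h = lambda x: x // 2 if x % 2 == 0 else (x + 1) // 2
assert is_surjective(h, N, range(6)) and not is_injective(h, N)
```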

3.3 Inverses of Functions


One obvious question about functions is whether a given mapping can be “reversed.” For instance, the successor function f(x) = x + 1 can be reversed in the sense that the function g(y) = y − 1 “undoes” what f does. But we must be careful: While the definition of g defines a function Z → Z, it does not define a function N → N (g(0) ∉ N). So even in simple cases, whether a function can be reversed is not quite obvious, and may depend on the domain and codomain. Let’s give a precise definition.
Definition 3.9. A function g : Y → X is an inverse of a function f : X → Y if f(g(y)) = y and g(f(x)) = x for all x ∈ X and y ∈ Y.

When do functions have inverses? A good candidate for an inverse of f : X → Y is g : Y → X “defined by”

g(y) = “the” x such that f(x) = y.

Figure 3.5: The composition g ◦ f of two functions f and g.

The scare quotes around “defined by” suggest that this is not a definition. At least, it is not in general. For in order for this definition to specify a function, there has to be one and only one x such that f(x) = y—the output of g has to be uniquely specified. Moreover, it has to be specified for every y ∈ Y. If there are x1 and x2 ∈ X with x1 ≠ x2 but f(x1) = f(x2), then g(y) would not be uniquely specified for y = f(x1) = f(x2). And if there is no x at all such that f(x) = y, then g(y) is not specified at all. In other words, for g to be defined, f has to be injective and surjective.

Proposition 3.10. If f : X → Y is bijective, f has a unique inverse f⁻¹ : Y → X.

Proof. Exercise.

3.4 Composition of Functions


We have already seen that the inverse f⁻¹ of a bijective function f is itself a function. It is also possible to compose functions f and g to define a new function by first applying f and then g. Of course, this is only possible if the ranges and domains match, i.e., the range of f must be a subset of the domain of g.

Definition 3.11 (Composition). Let f : X → Y and g : Y → Z. The composition of f with g is the function (g ◦ f) : X → Z, where (g ◦ f)(x) = g(f(x)).

The function (g ◦ f) : X → Z pairs each member of X with a member of Z. We specify which member of Z a member of X is paired with as follows—given an input x ∈ X, first apply the function f to x, which will output some y ∈ Y. Then apply the function g to y, which will output some z ∈ Z.

Example 3.12. Consider the functions f(x) = x + 1, and g(x) = 2x. What function do you get when you compose these two? (g ◦ f)(x) = g(f(x)). So that means for every natural number you give this function, you first add one, and then you multiply the result by two. So their composition is (g ◦ f)(x) = 2(x + 1).
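Since Python functions are values, composition itself is a one-line higher-order function; a small sketch checking the worked example:

```python
def compose(g, f):
    """Return g o f, the function mapping x to g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: x + 1      # successor
g = lambda x: 2 * x      # doubling
gf = compose(g, f)
assert all(gf(x) == 2 * (x + 1) for x in range(100))
# Composition is not commutative: (f o g)(x) = 2x + 1 instead.
assert compose(f, g)(3) == 7 and gf(3) == 8
```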

3.5 Isomorphism
An isomorphism is a bijection that preserves the structure of the sets it relates, where structure is a matter of the relationships that obtain between the elements of the sets. Consider the following two sets X = {1, 2, 3} and Y = {4, 5, 6}. These sets are both structured by the relations successor, less than, and greater than. An isomorphism between the two sets is a bijection that preserves those structures. So a bijective function f : X → Y is an isomorphism if: i < j iff f(i) < f(j), i > j iff f(i) > f(j), and j is the successor of i iff f(j) is the successor of f(i).

Definition 3.13 (Isomorphism). Let U be the pair ⟨X, R⟩ and V be the pair ⟨Y, S⟩ such that X and Y are sets and R and S are relations on X and Y respectively. A bijection f from X to Y is an isomorphism from U to V iff it preserves the relational structure, that is, for any x1 and x2 in X, ⟨x1, x2⟩ ∈ R iff ⟨f(x1), f(x2)⟩ ∈ S.

Example 3.14. Consider the following two sets X = {1, 2, 3} and Y = {4, 5, 6}, and the relations less than and greater than. The function f : X → Y where f(x) = 7 − x is an isomorphism between ⟨X, <⟩ and ⟨Y, >⟩.

3.6 Partial Functions


It is sometimes useful to relax the definition of function so that it is not required that the output of the function is defined for all possible inputs. Such mappings are called partial functions.

Definition 3.15. A partial function f : X ⇀ Y is a mapping which assigns to every element of X at most one element of Y. If f assigns an element of Y to x ∈ X, we say f(x) is defined, and otherwise undefined. If f(x) is defined, we write f(x)↓, otherwise f(x)↑. The domain of a partial function f is the subset of X where it is defined, i.e., dom(f) = {x : f(x)↓}.

Example 3.16. Every function f : X → Y is also a partial function. Partial functions that are defined everywhere on X—i.e., what we so far have simply called a function—are also called total functions.

Example 3.17. The partial function f : R ⇀ R given by f(x) = 1/x is undefined for x = 0, and defined everywhere else.


3.7 Functions and Relations


A function which maps elements of X to elements of Y obviously defines a
relation between X and Y, namely the relation which holds between x and
y iff f ( x ) = y. In fact, we might even—if we are interested in reducing the
building blocks of mathematics for instance—identify the function f with this
relation, i.e., with a set of pairs. This then raises the question: which relations
define functions in this way?
Definition 3.18 (Graph of a function). Let f : X ⇀ Y be a partial function. The graph of f is the relation Rf ⊆ X × Y defined by

Rf = {⟨x, y⟩ : f(x) = y}.
Proposition 3.19. Suppose R ⊆ X × Y has the property that whenever Rxy and Rxy′ then y = y′. Then R is the graph of the partial function f : X ⇀ Y defined by: if there is a y such that Rxy, then f(x) = y, otherwise f(x)↑. If R is also serial, i.e., for each x ∈ X there is a y ∈ Y such that Rxy, then f is total.

Proof. Suppose there is a y such that Rxy. If there were another y′ ≠ y such that Rxy′, the condition on R would be violated. Hence, if there is a y such that Rxy, that y is unique, and so f is well-defined. Obviously, Rf = R and f is total if R is serial.
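Proposition 3.19 is, in effect, how Python dictionaries work: a functional relation, stored as a set of pairs, determines a dict, and the dict’s items are exactly the graph of the resulting function. A small sketch:

```python
R = {(1, "a"), (2, "b"), (3, "a")}     # functional: no x with two values

# Check the condition of Proposition 3.19, then build f from its graph.
assert all(y == y2 for (x, y) in R for (x2, y2) in R if x == x2)
f = dict(R)                            # f(x) = y iff (x, y) in R
assert f[2] == "b"
# Recover the graph of f, as in Definition 3.18:
assert set(f.items()) == R
```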

Problems
Problem 3.1. Show that if f is bijective, an inverse g of f exists, i.e., define
such a g, show that it is a function, and show that it is an inverse of f , i.e.,
f ( g(y)) = y and g( f ( x )) = x for all x ∈ X and y ∈ Y.
Problem 3.2. Show that if f : X → Y has an inverse g, then f is bijective.
Problem 3.3. Show that if g : Y → X and g′ : Y → X are inverses of f : X → Y, then g = g′, i.e., for all y ∈ Y, g(y) = g′(y).
Problem 3.4. Show that if f : X → Y and g : Y → Z are both injective, then
g ◦ f : X → Z is injective.
Problem 3.5. Show that if f : X → Y and g : Y → Z are both surjective, then
g ◦ f : X → Z is surjective.
Problem 3.6. Given f : X ⇀ Y, define the partial function g : Y ⇀ X by: for any y ∈ Y, if there is a unique x ∈ X such that f(x) = y, then g(y) = x; otherwise g(y)↑. Show that if f is injective, then g(f(x)) = x for all x ∈ dom(f), and f(g(y)) = y for all y ∈ ran(f).
Problem 3.7. Suppose f : X → Y and g : Y → Z. Show that the graph of (g ◦ f) is Rf | Rg.



Chapter 4

The Size of Sets

4.1 Introduction
When Georg Cantor developed set theory in the 1870s, his interest was in part
to make palatable the idea of an infinite collection—an actual infinity, as the
medievals would say. Key to this rehabilitation of the notion of the infinite
was a way to assign sizes—“cardinalities”—to sets. The cardinality of a finite
set is just a natural number, e.g., ∅ has cardinality 0, and a set containing five
things has cardinality 5. But what about infinite sets? Do they all have the
same cardinality, ∞? It turns out, they do not.
The first important idea here is that of an enumeration. We can list every
finite set by listing all its elements. For some infinite sets, we can also list
all their elements if we allow the list itself to be infinite. Such sets are called
enumerable. Cantor’s surprising result was that some infinite sets are not
enumerable.

4.2 Enumerable Sets


One way of specifying a finite set is by listing its elements. But conversely,
since there are only finitely many elements in a set, every finite set can be
enumerated. By this we mean: its elements can be put into a list (a list with
a beginning, where each element of the list other than the first has a unique
predecessor). Some infinite sets can also be enumerated, such as the set of
positive integers.

Definition 4.1 (Enumeration). Informally, an enumeration of a set X is a list (possibly infinite) of elements of X such that every element of X appears on the list at some finite position. If X has an enumeration, then X is said to be enumerable. If X is enumerable and infinite, we say X is denumerable.

A couple of points about enumerations:


1. We count as enumerations only lists which have a beginning and in which every element other than the first has a single element immediately preceding it. In other words, there are only finitely many elements between the first element of the list and any other element. In particular, this means that every element of an enumeration has a finite position: the first element has position 1, the second position 2, etc.

2. We can have different enumerations of the same set X which differ by the order in which the elements appear: 4, 1, 25, 16, 9 enumerates the (set of the) first five square numbers just as well as 1, 4, 9, 16, 25 does.

3. Redundant enumerations are still enumerations: 1, 1, 2, 2, 3, 3, . . . enumerates the same set as 1, 2, 3, . . . does.

4. Order and redundancy do matter when we specify an enumeration: we can enumerate the positive integers beginning with 1, 2, 3, 1, . . . , but the pattern is easier to see when enumerated in the standard way as 1, 2, 3, 4, . . .

5. Enumerations must have a beginning: . . . , 3, 2, 1 is not an enumeration of the positive integers because it has no first element. To see how this follows from the informal definition, ask yourself, “at what position in the list does the number 76 appear?”

6. The following is not an enumeration of the positive integers: 1, 3, 5, . . . , 2, 4, 6, . . . The problem is that the even numbers occur at places ∞ + 1, ∞ + 2, ∞ + 3, rather than at finite positions.

7. Lists may be gappy: 2, −, 4, −, 6, −, . . . enumerates the even positive integers.

8. The empty set is enumerable: it is enumerated by the empty list!

Proposition 4.2. If X has an enumeration, it has an enumeration without gaps or repetitions.

Proof. Suppose X has an enumeration x1, x2, . . . in which each xi is an element of X or a gap. We can remove repetitions from an enumeration by replacing repeated elements by gaps. For instance, we can turn the enumeration into a new one in which x′i is xi if xi is an element of X that is not among x1, . . . , xi−1, or is − if it is. We can remove gaps by closing up the elements in the list. To make precise what “closing up” amounts to is a bit difficult to describe. Roughly, it means that we can generate a new enumeration x″1, x″2, . . . , where each x″i is the first element in the enumeration x′1, x′2, . . . after x″i−1 (if there is one).


The last argument shows that in order to get a good handle on enumerations and enumerable sets and to prove things about them, we need a more precise definition. The following provides it.

Definition 4.3 (Enumeration). An enumeration of a set X is any surjective function f : Z+ → X.
Let’s convince ourselves that the formal definition and the informal definition using a possibly gappy, possibly infinite list are equivalent. A surjective function (partial or total) from Z+ to a set X enumerates X. Such a function determines an enumeration as defined informally above: the list f(1), f(2), f(3), . . . . Since f is surjective, every element of X is guaranteed to be the value of f(n) for some n ∈ Z+. Hence, every element of X appears at some finite position in the list. Since the function may not be injective, the list may be redundant, but that is acceptable (as noted above).
On the other hand, given a list that enumerates all elements of X, we can define a surjective function f : Z+ → X by letting f(n) be the nth element of the list that is not a gap, or the final element of the list if there is no nth element. There is one case in which this does not produce a surjective function: if X is empty, and hence the list is empty. So, every non-empty list determines a surjective function f : Z+ → X.
Definition 4.4. A set X is enumerable iff it is empty or has an enumeration.
Example 4.5. A function enumerating the positive integers (Z+ ) is simply the
identity function given by f (n) = n. A function enumerating the natural
numbers N is the function g(n) = n − 1.
Example 4.6. The functions f : Z+ → Z+ and g : Z+ → Z+ given by

f (n) = 2n and
g(n) = 2n + 1

enumerate the even positive integers and the odd positive integers, respectively. However, neither function is an enumeration of Z+, since neither is surjective.
Example 4.7. The function f(n) = (−1)^n ⌈(n − 1)/2⌉ (where ⌈x⌉ denotes the ceiling function, which rounds x up to the nearest integer) enumerates the set of integers Z. Notice how f generates the values of Z by “hopping” back and forth between positive and negative integers:

f(1) = −⌈0/2⌉ = 0
f(2) = ⌈1/2⌉ = 1
f(3) = −⌈2/2⌉ = −1
f(4) = ⌈3/2⌉ = 2
f(5) = −⌈4/2⌉ = −2
f(6) = ⌈5/2⌉ = 3
. . .

You can also think of f as defined by cases as follows:

f(n) = 0 if n = 1,
f(n) = n/2 if n is even,
f(n) = −(n − 1)/2 if n is odd and > 1.
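Both forms of the definition are easy to transcribe; a Python sketch that checks the case-by-case version against the closed form and reproduces the hopping pattern above:

```python
from math import ceil

def f(n):
    """Enumerate Z: 0, 1, -1, 2, -2, 3, ... for n = 1, 2, 3, ..."""
    if n == 1:
        return 0
    return n // 2 if n % 2 == 0 else -(n - 1) // 2

assert [f(n) for n in range(1, 8)] == [0, 1, -1, 2, -2, 3, -3]
# The closed form from the text: f(n) = (-1)^n * ceil((n - 1)/2)
assert all(f(n) == (-1) ** n * ceil((n - 1) / 2) for n in range(1, 1000))
```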

That is fine for “easy” sets. What about the set of, say, pairs of positive integers?

Z+ × Z+ = {⟨n, m⟩ : n, m ∈ Z+}

We can organize the pairs of positive integers in an array, such as the following:

        1        2        3        4      . . .
1    ⟨1, 1⟩  ⟨1, 2⟩  ⟨1, 3⟩  ⟨1, 4⟩  . . .
2    ⟨2, 1⟩  ⟨2, 2⟩  ⟨2, 3⟩  ⟨2, 4⟩  . . .
3    ⟨3, 1⟩  ⟨3, 2⟩  ⟨3, 3⟩  ⟨3, 4⟩  . . .
4    ⟨4, 1⟩  ⟨4, 2⟩  ⟨4, 3⟩  ⟨4, 4⟩  . . .
        ⋮        ⋮        ⋮        ⋮      ⋱
Clearly, every ordered pair in Z+ × Z+ will appear exactly once in the array. In particular, ⟨n, m⟩ will appear in the nth row and mth column. But how do we organize the elements of such an array into a one-way list? The pattern in the array below demonstrates one way to do this:

 1   2   4   7  . . .
 3   5   8  . . .
 6   9  . . .
10  . . .
 ⋮

This pattern is called Cantor’s zig-zag method. Other patterns are perfectly permissible, as long as they “zig-zag” through every cell of the array. By Cantor’s zig-zag method, the enumeration for Z+ × Z+ according to this scheme would be:

⟨1, 1⟩, ⟨1, 2⟩, ⟨2, 1⟩, ⟨1, 3⟩, ⟨2, 2⟩, ⟨3, 1⟩, ⟨1, 4⟩, ⟨2, 3⟩, ⟨3, 2⟩, ⟨4, 1⟩, . . .
What ought we do about enumerating, say, the set of ordered triples of positive integers?

Z+ × Z+ × Z+ = {⟨n, m, k⟩ : n, m, k ∈ Z+}

We can think of Z+ × Z+ × Z+ as the Cartesian product of Z+ × Z+ and Z+, that is,

(Z+)³ = (Z+ × Z+) × Z+ = {⟨⟨n, m⟩, k⟩ : ⟨n, m⟩ ∈ Z+ × Z+, k ∈ Z+}


and thus we can enumerate (Z+)³ with an array by labelling one axis with the enumeration of Z+, and the other axis with the enumeration of (Z+)²:

               1           2           3           4        . . .
⟨1, 1⟩   ⟨1, 1, 1⟩   ⟨1, 1, 2⟩   ⟨1, 1, 3⟩   ⟨1, 1, 4⟩   . . .
⟨1, 2⟩   ⟨1, 2, 1⟩   ⟨1, 2, 2⟩   ⟨1, 2, 3⟩   ⟨1, 2, 4⟩   . . .
⟨2, 1⟩   ⟨2, 1, 1⟩   ⟨2, 1, 2⟩   ⟨2, 1, 3⟩   ⟨2, 1, 4⟩   . . .
⟨1, 3⟩   ⟨1, 3, 1⟩   ⟨1, 3, 2⟩   ⟨1, 3, 3⟩   ⟨1, 3, 4⟩   . . .
   ⋮           ⋮           ⋮           ⋮           ⋮

Thus, by using a method like Cantor’s zig-zag method, we may similarly obtain an enumeration of (Z+)³.
Cantor’s zig-zag method makes the enumerability of (Z+)² (and analogously, (Z+)³, etc.) visually evident. Following the zig-zag line in the array and counting the places, we can tell that ⟨2, 3⟩ is at place 8, but specifying the inverse g : (Z+)² → Z+ of the zig-zag enumeration such that

g(⟨1, 1⟩) = 1, g(⟨1, 2⟩) = 2, g(⟨2, 1⟩) = 3, . . . , g(⟨2, 3⟩) = 8, . . .

would be helpful. To calculate the position of each pair in the enumeration, we can use the function below. (The exact derivation of the function is somewhat messy, so we are skipping it here.)

g(n, m) = (n + m − 2)(n + m − 1)/2 + n

Accordingly, the pair ⟨2, 3⟩ is in position ((2 + 3 − 2)(2 + 3 − 1)/2) + 2 = (3 · 4/2) + 2 = (12/2) + 2 = 8; pair ⟨3, 7⟩ is in position ((3 + 7 − 2)(3 + 7 − 1)/2) + 3 = 39.
Functions like g above, i.e., inverses of enumerations of sets of pairs, are
called pairing functions.

Definition 4.8 (Pairing function). A function f : X × Y → Z+ is an arithmetical pairing function if f is total and injective. We also say that f encodes X × Y, and that for f(⟨x, y⟩) = n, n is the code for ⟨x, y⟩.

The idea is that we can use such functions to encode, e.g., pairs of posi-
tive integers in Z+ , or, in other words, represent pairs of positive integers as
positive integers. Using the inverse of the pairing function, we can decode the
integer, i.e., find out which pair of positive integers is represented.
There are other enumerations of (Z+ )2 that make it easier to figure out
what their inverses are. Here is one. Instead of visualizing the enumeration
in an array, start with the list of positive integers associated with (initially)
empty spaces. Imagine filling these spaces successively with pairs ⟨n, m⟩ as
follows. Starting with the pairs that have 1 in the first place (i.e., pairs ⟨1, m⟩),
put the first (i.e., ⟨1, 1⟩) in the first empty place, then skip an empty space, put


the second (i.e., ⟨1, 2⟩) in the next empty place, skip one again, and so forth.
The (incomplete) beginning of our enumeration now looks like this:

    f(1)    f(2)    f(3)    f(4)    f(5)    f(6)    f(7)    f(8)    f(9)    f(10)   ...
    ⟨1,1⟩           ⟨1,2⟩           ⟨1,3⟩           ⟨1,4⟩           ⟨1,5⟩           ...

Repeat this with pairs ⟨2, m⟩ for the places that still remain empty, again skip-
ping every other empty place:

    f(1)    f(2)    f(3)    f(4)    f(5)    f(6)    f(7)    f(8)    f(9)    f(10)   ...
    ⟨1,1⟩   ⟨2,1⟩   ⟨1,2⟩           ⟨1,3⟩   ⟨2,2⟩   ⟨1,4⟩           ⟨1,5⟩   ⟨2,3⟩   ...

Enter pairs ⟨3, m⟩, ⟨4, m⟩, etc., in the same way. Our completed enumeration
thus starts like this:

    f(1)    f(2)    f(3)    f(4)    f(5)    f(6)    f(7)    f(8)    f(9)    f(10)   ...
    ⟨1,1⟩   ⟨2,1⟩   ⟨1,2⟩   ⟨3,1⟩   ⟨1,3⟩   ⟨2,2⟩   ⟨1,4⟩   ⟨4,1⟩   ⟨1,5⟩   ⟨2,3⟩   ...
If we number the cells in the array above according to this enumeration, we
will not find a neat zig-zag line, but this arrangement:

         1    2    3    4    5    6   ...
    1    1    3    5    7    9   11   ...
    2    2    6   10   14   18   ...
    3    4   12   20   28   ...
    4    8   24   40   ...
    5   16   48   ...
    6   32   ...
    ⋮    ⋮

We can see that the pairs in the first row are in the odd numbered places
of our enumeration, i.e., pair ⟨1, m⟩ is in place 2m − 1; pairs in the second row,
⟨2, m⟩, are in places whose number is the double of an odd number, specifi-
cally, 2 · (2m − 1); pairs in the third row, ⟨3, m⟩, are in places whose number is
four times an odd number, 4 · (2m − 1); and so on. The factors of (2m − 1) for
each row, 1, 2, 4, 8, . . . , are powers of 2: 2⁰, 2¹, 2², 2³, . . . In fact, the relevant ex-
ponent is one less than the first member of the pair in question. Thus, for pair
⟨n, m⟩ the exponent is n − 1. This gives us the general formula 2ⁿ⁻¹ · (2m − 1),
and hence:
Example 4.9. The function h : (Z+)2 → Z+ given by

    h(n, m) = 2ⁿ⁻¹(2m − 1)

is a pairing function for the set of pairs of positive integers (Z+)2.


Accordingly, in our second enumeration of (Z+)2, the pair ⟨2, 3⟩ is in po-
sition 2²⁻¹ · (2 · 3 − 1) = 2 · 5 = 10; pair ⟨3, 7⟩ is in position 2³⁻¹ · (2 · 7 − 1) =
4 · 13 = 52.
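Decoding under h is also mechanical: the first component is recovered by
counting how often the code is divisible by 2, and the second from the re-
maining odd part. A hedged Python sketch (ours):

    def h(n, m):
        return 2 ** (n - 1) * (2 * m - 1)

    def h_inverse(k):
        n = 1
        while k % 2 == 0:  # strip factors of 2; there are n - 1 of them
            k //= 2
            n += 1
        return (n, (k + 1) // 2)  # the odd part that remains is 2m - 1

    assert h(2, 3) == 10 and h(3, 7) == 52
    assert h_inverse(10) == (2, 3) and h_inverse(52) == (3, 7)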
Another common pairing function that encodes (Z+)2 is the following:

Example 4.10. The function j : (Z+)2 → Z+ given by

    j(n, m) = 2ⁿ3ᵐ

is a pairing function for the set of pairs of positive integers (Z+)2.

    j is injective, but not surjective. That means the inverse of j is a partial,
surjective function, and hence an enumeration of (Z+)2. (Exercise.)

4.3 Non-enumerable Sets


Some sets, such as the set Z+ of positive integers, are infinite. So far we’ve
seen examples of infinite sets which were all enumerable. However, there are
also infinite sets which do not have this property. Such sets are called non-
enumerable.
First of all, it is perhaps already surprising that there are non-enumerable
sets. For any enumerable set X there is a surjective function f : Z+ → X. If a
set is non-enumerable there is no such function. That is, no function mapping
the infinitely many elements of Z+ to X can exhaust all of X. So there are
“more” elements of X than the infinitely many positive integers.
How would one prove that a set is non-enumerable? You have to show
that no such surjective function can exist. Equivalently, you have to show that
the elements of X cannot be enumerated in a one-way infinite list. The best
way to do this is to show that every list of elements of X must leave at least
one element out; or that no function f : Z+ → X can be surjective. We can
do this using Cantor’s diagonal method. Given a list of elements of X, say, x1 ,
x2 , . . . , we construct another element of X which, by its construction, cannot
possibly be on that list.
Our first example is the set Bω of all infinite, non-gappy sequences of 0’s
and 1’s.

Theorem 4.11. Bω is non-enumerable.

Proof. Suppose, by way of contradiction, that Bω is enumerable, i.e., suppose


that there is a list s1, s2, s3, s4, . . . of all elements of Bω. Each of these si is
itself an infinite sequence of 0’s and 1’s. Let’s call the j-th element of the i-th
sequence in this list si(j). Then the i-th sequence si is

    si(1), si(2), si(3), . . .


We may arrange this list, and the elements of each sequence si in it, in an
array:
          1       2       3       4     ...
    1   s1(1)   s1(2)   s1(3)   s1(4)   ...
    2   s2(1)   s2(2)   s2(3)   s2(4)   ...
    3   s3(1)   s3(2)   s3(3)   s3(4)   ...
    4   s4(1)   s4(2)   s4(3)   s4(4)   ...
    ⋮     ⋮       ⋮       ⋮       ⋮     ⋱
The labels down the side give the number of the sequence in the list s1 , s2 , . . . ;
the numbers across the top label the elements of the individual sequences. For
instance, s1 (1) is a name for whatever number, a 0 or a 1, is the first element
in the sequence s1 , and so on.
Now we construct an infinite sequence, s, of 0’s and 1’s which cannot pos-
sibly be on this list. The definition of s will depend on the list s1 , s2 , . . . .
Any infinite list of infinite sequences of 0’s and 1’s gives rise to an infinite
sequence s which is guaranteed to not appear on the list.
To define s, we specify what all its elements are, i.e., we specify s(n) for all
n ∈ Z+ . We do this by reading down the diagonal of the array above (hence
the name “diagonal method”) and then changing every 1 to a 0 and every 0 to
a 1. More abstractly, we define s(n) to be 0 or 1 according to whether the n-th
element of the diagonal, sn(n), is 1 or 0:

    s(n) = 1   if sn(n) = 0
           0   if sn(n) = 1.

If you like formulas better than definitions by cases, you could also define
s(n) = 1 − sn(n).
Clearly s is a non-gappy infinite sequence of 0’s and 1’s, since it is just the
mirror sequence to the sequence of 0’s and 1’s that appear on the diagonal of
our array. So s is an element of Bω . But it cannot be on the list s1 , s2 , . . . Why
not?
It can’t be the first sequence in the list, s1 , because it differs from s1 in the
first element. Whatever s1 (1) is, we defined s(1) to be the opposite. It can’t be
the second sequence in the list, because s differs from s2 in the second element:
if s2 (2) is 0, s(2) is 1, and vice versa. And so on.
More precisely: if s were on the list, there would be some k so that s = sk.
Two sequences are identical iff they agree at every place, i.e., for any n, s(n) =
sk(n). So in particular, taking n = k as a special case, s(k) = sk(k) would
have to hold. sk(k) is either 0 or 1. If it is 0 then s(k) must be 1—that’s how
we defined s. But if sk(k) = 1 then, again because of the way we defined s,
s(k) = 0. In either case s(k) ≠ sk(k).
We started by assuming that there is a list of elements of Bω , s1 , s2 , . . .
From this list we constructed a sequence s which we proved cannot be on the


list. But it definitely is a sequence of 0’s and 1’s if all the si are sequences of
0’s and 1’s, i.e., s ∈ Bω . This shows in particular that there can be no list of
all elements of Bω , since for any such list we could also construct a sequence s
guaranteed to not be on the list, so the assumption that there is a list of all
sequences in Bω leads to a contradiction.
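If we represent an infinite sequence of 0’s and 1’s as a function from Z+ to
{0, 1}, the construction of s can be written down directly. The following
Python sketch is only an illustration under that representation, not part of
the proof:

    def diagonal(s):
        # s is the list: s(i) is the i-th sequence, itself a function on Z+.
        # The result differs from s(n) at place n, for every n.
        return lambda n: 1 - s(n)(n)

For instance, if every sequence on the list is the constant-0 sequence, then
diagonal(s) is the constant-1 sequence, which indeed appears nowhere on
the list.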

This proof method is called “diagonalization” because it uses the diagonal


of the array to define s. Diagonalization need not involve the presence of an
array: we can show that sets are not enumerable by using a similar idea even
when no array and no actual diagonal is involved.
Theorem 4.12. ℘(Z+ ) is not enumerable.

Proof. We proceed in the same way, by showing that for every list of subsets
of Z+ there is a subset of Z+ which cannot be on the list. Suppose the follow-
ing is a given list of subsets of Z+ :

Z1 , Z2 , Z3 , . . .

We now define a set Z such that for any n ∈ Z+, n ∈ Z iff n ∉ Zn:

    Z = {n ∈ Z+ : n ∉ Zn}

Z is clearly a set of positive integers, since by assumption each Zn is, and thus
Z ∈ ℘(Z+). But Z cannot be on the list. To show this, we’ll establish that for
each k ∈ Z+, Z ≠ Zk.
    So let k ∈ Z+ be arbitrary. We’ve defined Z so that for any n ∈ Z+, n ∈ Z
iff n ∉ Zn. In particular, taking n = k, k ∈ Z iff k ∉ Zk. But this shows that
Z ≠ Zk, since k is an element of one but not the other, and so Z and Zk have
different elements. Since k was arbitrary, Z is not on the list Z1, Z2, . . .

The preceding proof did not mention a diagonal, but you can think of it
as involving a diagonal if you picture it this way: Imagine the sets Z1, Z2, . . . ,
written in an array, where each element j ∈ Zi is listed in the j-th column.
Say the first four sets on that list are {1, 2, 3, . . . }, {2, 4, 6, . . . }, {1, 2, 5}, and
{3, 4, 5, . . . }. Then the array would begin with

    Z1 = {1, 2, 3, 4, 5, 6, . . . }
    Z2 = {   2,    4,    6, . . . }
    Z3 = {1, 2,          5       }
    Z4 = {      3, 4, 5, 6, . . . }
     ⋮

Then Z is the set obtained by going down the diagonal, leaving out any num-
bers that appear along the diagonal and including those j where the array has a
gap in the j-th row/column. In the above case, we would leave out 1 and 2,
include 3, leave out 4, etc.


4.4 Reduction
We showed ℘(Z+ ) to be non-enumerable by a diagonalization argument. We
already had a proof that Bω , the set of all infinite sequences of 0s and 1s,
is non-enumerable. Here’s another way we can prove that ℘(Z+ ) is non-
enumerable: Show that if ℘(Z+ ) is enumerable then Bω is also enumerable. Since
we know Bω is not enumerable, ℘(Z+ ) can’t be either. This is called reducing
one problem to another—in this case, we reduce the problem of enumerating
Bω to the problem of enumerating ℘(Z+ ). A solution to the latter—an enu-
meration of ℘(Z+ )—would yield a solution to the former—an enumeration
of Bω .
How do we reduce the problem of enumerating a set Y to that of enu-
merating a set X? We provide a way of turning an enumeration of X into an
enumeration of Y. The easiest way to do that is to define a surjective function
f : X → Y. If x1 , x2 , . . . enumerates X, then f ( x1 ), f ( x2 ), . . . would enumer-
ate Y. In our case, we are looking for a surjective function f : ℘(Z+ ) → Bω .

Proof of Theorem 4.12 by reduction. Suppose that ℘(Z+) were enumerable, and
thus that there is an enumeration of it, Z1, Z2, Z3, . . .
Define the function f : ℘(Z+) → Bω by letting f(Z) be the sequence s
such that s(n) = 1 iff n ∈ Z, and s(n) = 0 otherwise. This clearly defines
a function, since whenever Z ⊆ Z+, any n ∈ Z+ either is an element of Z or
isn’t. For instance, the set 2Z+ = {2, 4, 6, . . . } of positive even numbers gets
mapped to the sequence 010101. . . , the empty set gets mapped to 0000. . . ,
and the set Z+ itself to 1111. . . .
It also is surjective: Every sequence of 0s and 1s corresponds to some set of
positive integers, namely the one which has as its members those integers cor-
responding to the places where the sequence has 1s. More precisely, suppose
s ∈ Bω . Define Z ⊆ Z+ by:

Z = { n ∈ Z+ : s ( n ) = 1 }

Then f ( Z ) = s, as can be verified by consulting the definition of f .


Now consider the list

f ( Z1 ), f ( Z2 ), f ( Z3 ), . . .

Since f is surjective, every member of Bω must appear as a value of f for some


argument, and so must appear on the list. This list must therefore enumerate
all of Bω .
So if ℘(Z+) were enumerable, Bω would be enumerable. But Bω is non-
enumerable (Theorem 4.11). Hence ℘(Z+) is non-enumerable.
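The function f used here is just the passage from a set to its characteristic
sequence. Representing a subset of Z+ by its membership test, a minimal
Python sketch (ours) looks like this:

    def f(Z):
        # Z is a membership test for a subset of Z+; f(Z) is the
        # corresponding infinite sequence, given as a function on Z+
        return lambda n: 1 if Z(n) else 0

    evens = f(lambda n: n % 2 == 0)
    assert [evens(n) for n in range(1, 7)] == [0, 1, 0, 1, 0, 1]  # 010101...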

It is easy to be confused about the direction the reduction goes in. For
instance, a surjective function g : Bω → X does not establish that X is non-
enumerable. (Consider g : Bω → B defined by g(s) = s(1), the function that


maps a sequence of 0’s and 1’s to its first element. It is surjective, because
some sequences start with 0 and some start with 1. But B is finite.) Note also
that the function f must be surjective, or otherwise the argument does not go
through: f(x1), f(x2), . . . would then not be guaranteed to include all the
elements of Y. For instance, h : Z+ → Bω defined by

    h(n) = 00 . . . 0 (a string of n 0’s)

is a function, but Z+ is enumerable (while Bω is not).

4.5 Equinumerous Sets


We have an intuitive notion of “size” of sets, which works fine for finite sets.
But what about infinite sets? If we want to come up with a formal way of com-
paring the sizes of two sets of any size, it is a good idea to start with defining
when sets are the same size. Let’s say sets of the same size are equinumerous.
We want the formal notion of equinumerosity to correspond with our intuitive
notion of “same size,” hence the formal notion ought to satisfy the following
properties:

Reflexivity: Every set is equinumerous with itself.

Symmetry: For any sets X and Y, if X is equinumerous with Y, then Y is


equinumerous with X.

Transitivity: For any sets X, Y, and Z, if X is equinumerous with Y and Y is


equinumerous with Z, then X is equinumerous with Z.

In other words, we want equinumerosity to be an equivalence relation.

Definition 4.13. A set X is equinumerous with a set Y, X ≈ Y, if and only if


there is a bijective f : X → Y.

Proposition 4.14. Equinumerosity defines an equivalence relation.

Proof. Let X, Y, and Z be sets.

Reflexivity: Using the identity map 1X : X → X, where 1X ( x ) = x for all


x ∈ X, we see that X is equinumerous with itself (clearly, 1X is bijective).

Symmetry: Suppose that X is equinumerous with Y. Then there is a bijective
    f : X → Y. Since f is bijective, its inverse f⁻¹ exists and is also bijective.
    Hence, f⁻¹ : Y → X is a bijective function from Y to X, so Y is also
    equinumerous with X.


Transitivity: Suppose that X is equinumerous with Y via the bijective func-
    tion f : X → Y and that Y is equinumerous with Z via the bijective func-
    tion g : Y → Z. Then the composition g ◦ f : X → Z is bijective, and
    X is thus equinumerous with Z.

Therefore, equinumerosity is an equivalence relation.

Theorem 4.15. Suppose X and Y are equinumerous. Then X is enumerable if and


only if Y is.

Proof. Let X and Y be equinumerous. Suppose that X is enumerable. Then


either X = ∅ or there is a surjective function f : Z+ → X. Since X and Y
are equinumerous, there is a bijective g : X → Y. If X = ∅, then Y = ∅ also
(otherwise there would be an element y ∈ Y but no x ∈ X with g( x ) = y). If,
on the other hand, f : Z+ → X is surjective, then g ◦ f : Z+ → Y is surjective.
To see this, let y ∈ Y. Since g is surjective, there is an x ∈ X such that g( x ) = y.
Since f is surjective, there is an n ∈ Z+ such that f (n) = x. Hence,

( g ◦ f )(n) = g( f (n)) = g( x ) = y

and thus g ◦ f is surjective. We have that g ◦ f is an enumeration of Y, and so


Y is enumerable.

4.6 Comparing Sizes of Sets


Just like we were able to make precise when two sets have the same size in
a way that also accounts for the size of infinite sets, we can also compare the
sizes of sets in a precise way. Our definition of “is smaller than (or equinu-
merous)” will require, instead of a bijection between the sets, a total injective
function from the first set to the second. If such a function exists, the size of the
first set is less than or equal to the size of the second. Intuitively, an injective
function from one set to another guarantees that the range of the function has
at least as many elements as the domain, since no two elements of the domain
map to the same element of the range.

Definition 4.16. X is no larger than Y, X  Y, if and only if there is an injective


function f : X → Y.

Theorem 4.17 (Schröder-Bernstein). Let X and Y be sets. If X  Y and Y  X,


then X ≈ Y.

In other words, if there is a total injective function from X to Y, and if there


is a total injective function from Y back to X, then there is a total bijection
from X to Y. Sometimes, it can be difficult to think of a bijection between two
equinumerous sets, so the Schröder-Bernstein theorem allows us to break the
comparison down into cases so we only have to think of an injection from


the first to the second, and vice-versa. The Schröder-Bernstein theorem, apart
from being convenient, justifies the act of discussing the “sizes” of sets, for
it tells us that set cardinalities have the familiar anti-symmetric property that
numbers have.

Definition 4.18. X is smaller than Y, X ≺ Y, if and only if there is an injective


function f : X → Y but no bijective g : X → Y.

Theorem 4.19 (Cantor). For all X, X ≺ ℘( X ).

Proof. The function f : X → ℘(X) that maps any x ∈ X to its singleton {x} is
injective, since if x ≠ y then also f(x) = {x} ≠ {y} = f(y).
There cannot be a surjective function g : X → ℘( X ), let alone a bijective
one. For suppose that g : X → ℘( X ). Since g is total, every x ∈ X is mapped
to a subset g( x ) ⊆ X. We show that g cannot be surjective. To do this, we
define a subset Y ⊆ X which by definition cannot be in the range of g. Let

    Y = {x ∈ X : x ∉ g(x)}.

Since g(x) is defined for all x ∈ X, Y is clearly a well-defined subset of X. But
it cannot be in the range of g. Let x ∈ X be arbitrary; we show that Y ≠ g(x).
If x ∈ g(x), then it does not satisfy x ∉ g(x), and so by the definition of Y, we
have x ∉ Y. If x ∈ Y, it must satisfy the defining property of Y, i.e., x ∉ g(x).
Since x was arbitrary this shows that for each x ∈ X, x ∈ g(x) iff x ∉ Y, and
so g(x) ≠ Y. So Y cannot be in the range of g, contradicting the assumption
that g is surjective.

It’s instructive to compare the proof of Theorem 4.19 to that of Theorem 4.12.
There we showed that for any list Z1, Z2, . . . , of subsets of Z+ one can construct
a set Z of numbers guaranteed not to be on the list. It was guaranteed not to
be on the list because, for every n ∈ Z+, n ∈ Zn iff n ∉ Z. This way, there is
always some number that is an element of one of Zn and Z but not the other.
We follow the same idea here, except the indices n are now elements of X
instead of Z+. The set Y is defined so that it is different from g(x) for each
x ∈ X, because x ∈ g(x) iff x ∉ Y. Again, there is always an element of X
which is an element of one of g(x) and Y but not the other. And just as Z
therefore cannot be on the list Z1, Z2, . . . , Y cannot be in the range of g.

Problems
Problem 4.1. According to ??, a set X is enumerable iff X = ∅ or there is
a surjective f : Z+ → X. It is also possible to define “enumerable set” pre-
cisely by: a set is enumerable iff there is an injective function g : X → Z+ .
Show that the definitions are equivalent, i.e., show that there is an injective
function g : X → Z+ iff either X = ∅ or there is a surjective f : Z+ → X.


Problem 4.2. Define an enumeration of the positive squares 4, 9, 16, . . .

Problem 4.3. Show that if X and Y are enumerable, so is X ∪ Y.

Problem 4.4. Show by induction on n that if X1 , X2 , . . . , Xn are all enumerable,


so is X1 ∪ · · · ∪ Xn .

Problem 4.5. Give an enumeration of the set of all positive rational numbers.
(A positive rational number is one that can be written as a fraction n/m with
n, m ∈ Z+ ).

Problem 4.6. Show that Q is enumerable. (A rational number is one that can
be written as a fraction z/m with z ∈ Z, m ∈ Z+ ).

Problem 4.7. Define an enumeration of B∗ .

Problem 4.8. Recall from your introductory logic course that each possible
truth table expresses a truth function. In other words, the truth functions are
all functions from Bk → B for some k. Prove that the set of all truth functions
is enumerable.

Problem 4.9. Show that the set of all finite subsets of an arbitrary infinite
enumerable set is enumerable.

Problem 4.10. A set of positive integers is said to be cofinite iff it is the com-
plement of a finite set of positive integers. Let I be the set that contains all the
finite and cofinite sets of positive integers. Show that I is enumerable.

Problem 4.11. Show that the enumerable union of enumerable sets is enumer-
able. That is, whenever X1, X2, . . . are sets, and each Xi is enumerable, then
the union X1 ∪ X2 ∪ X3 ∪ · · · of all of them is also enumerable.

Problem 4.12. Let f : X × Y → Z+ be an arbitrary pairing function. Show


that the inverse of f is an enumeration of X × Y.

Problem 4.13. Specify a function that encodes N3 .

Problem 4.14. Show that ℘(N) is non-enumerable by a diagonal argument.

Problem 4.15. Show that the set of functions f : Z+ → Z+ is non-enumerable


by an explicit diagonal argument. That is, show that if f 1 , f 2 , . . . , is a list of
functions and each f i : Z+ → Z+ , then there is some f : Z+ → Z+ not on this
list.

Problem 4.16. Show that if there is an injective function g : Y → X, and Y is


non-enumerable, then so is X. Do this by showing how you can use g to turn
an enumeration of X into one of Y.

Problem 4.17. Show that the set of all sets of pairs of positive integers is non-
enumerable by a reduction argument.


Problem 4.18. Show that Nω , the set of infinite sequences of natural numbers,
is non-enumerable by a reduction argument.

Problem 4.19. Let P be the set of functions from the set of positive integers
to the set {0}, and let Q be the set of partial functions from the set of positive
integers to the set {0}. Show that P is enumerable and Q is not. (Hint: reduce
the problem of enumerating Bω to enumerating Q).

Problem 4.20. Let S be the set of all surjective functions from the set of positive
integers to the set {0,1}, i.e., S consists of all surjective f : Z+ → B. Show that
S is non-enumerable.

Problem 4.21. Show that the set R of all real numbers is non-enumerable.

Problem 4.22. Show that if X is equinumerous with U and Y is equinu-
merous with V, and the intersections X ∩ Y and U ∩ V are empty, then the
unions X ∪ Y and U ∪ V are equinumerous.

Problem 4.23. Show that if X is infinite and enumerable, then it is equinumer-


ous with the positive integers Z+ .

Problem 4.24. Show that there cannot be an injective function g : ℘( X ) → X,


for any set X. Hint: Suppose g : ℘( X ) → X is injective. Then for each x ∈ X
there is at most one Y ⊆ X such that g(Y ) = x. Define a set Y such that for
every x ∈ X, g(Y ) 6= x.



Part II

Propositional Logic


This part contains material on classical propositional logic. The first
chapter is relatively rudimentary and just lists definitions and results;
many proofs are not carried out but are left as exercises. The material
on proof systems and the completeness theorem is included from the part
on first-order logic, with the “FOL” tag set to false. This leaves out ev-
erything related to predicates, terms, and quantifiers, and replaces talk of
structures M with talk about valuations v.
It is planned to expand this part to include more detail, and to add
further topics and results, such as truth-functional completeness.



Chapter 5

Syntax and Semantics

This is a very quick summary of definitions only. It should be ex-


panded to provide a gentle intro to proofs by induction on formulas, with
lots more examples.

5.1 Introduction
Propositional logic deals with formulas that are built from propositional vari-
ables using the propositional connectives ¬, ∧, ∨, →, and ↔. Intuitively,
a propositional variable p stands for a sentence or proposition that is true
or false. Whenever the truth values of the propositional variables in a formula
are determined, so is the truth value of any formula formed from them using
propositional connectives. We say that propositional logic is truth functional,
because its semantics is given by functions of truth values. In particular, in
propositional logic we leave out of consideration any further determination
of truth and falsity, e.g., whether something is necessarily true rather than
just contingently true, or whether something is known to be true, or whether
something is true now rather than was true or will be true. We only consider
two truth values true (T) and false (F), and so exclude from discussion the
possibility that a statement may be neither true nor false, or only half true. We
also concentrate only on connectives where the truth value of a formula built
from them is completely determined by the truth values of its parts (and not,
say, on its meaning). In particular, whether the truth value of conditionals in
English is truth functional in this sense is contentious. The material condi-
tional → is; other logics deal with conditionals that are not truth functional.
In order to develop the theory and metatheory of truth-functional propo-
sitional logic, we must first define the syntax and semantics of its expressions.
We will describe one way of constructing formulas from propositional vari-
ables using the connectives. Alternative definitions are possible. Other sys-
tems will choose different symbols, will select different sets of connectives as
primitive, will use parentheses differently (or even not at all, as in the case of
so-called Polish notation). What all approaches have in common, though, is
that the formation rules define the set of formulas inductively. If done prop-
erly, every expression can be formed in essentially only one way according to the
formation rules. The inductive definition resulting in expressions that are
uniquely readable means we can give meanings to these expressions using the
same method—inductive definition.
Giving the meaning of expressions is the domain of semantics. The central
concept in semantics for propositional logic is that of satisfaction in a valua-
tion. A valuation v assigns truth values T, F to the propositional variables.
Any valuation determines a truth value v(ϕ) for any formula ϕ. A formula is
satisfied in a valuation v iff v(ϕ) = T—we write this as v ⊨ ϕ. This relation
can also be defined by induction on the structure of ϕ, using the truth func-
tions for the logical connectives to define, say, satisfaction of ϕ ∧ ψ in terms of
satisfaction (or not) of ϕ and ψ.
    On the basis of the satisfaction relation v ⊨ ϕ for sentences we can then
define the basic semantic notions of tautology, entailment, and satisfiability.
A formula is a tautology, ⊨ ϕ, if every valuation satisfies it, i.e., v(ϕ) = T for
any v. It is entailed by a set of formulas, Γ ⊨ ϕ, if every valuation that satisfies
all the formulas in Γ also satisfies ϕ. And a set of formulas is satisfiable if
some valuation satisfies all formulas in it at the same time. Because formulas
are inductively defined, and satisfaction is in turn defined by induction on
the structure of formulas, we can use induction to prove properties of our
semantics and to relate the semantic notions defined.

5.2 Propositional Formulas


Formulas of propositional logic are built up from propositional variables and the
propositional constant ⊥ using logical connectives.

1. A denumerable set At0 of propositional variables p0 , p1 , . . .

2. The propositional constant for falsity ⊥.

3. The logical connectives: ¬ (negation), ∧ (conjunction), ∨ (disjunction),


→ (conditional)

4. Punctuation marks: (, ), and the comma.

In addition to the primitive connectives introduced above, we also use the
following defined symbols: ↔ (biconditional) and ⊤ (truth).
A defined symbol is not officially part of the language, but is introduced
as an informal abbreviation: it allows us to abbreviate formulas which would,


if we only used primitive symbols, get quite long. This is obviously an ad-
vantage. The bigger advantage, however, is that proofs become shorter. If a
symbol is primitive, it has to be treated separately in proofs. The more primi-
tive symbols, therefore, the longer our proofs.
You may be familiar with different terminology and symbols than the ones
we use above. Logic texts (and teachers) commonly use ∼, ¬, or ! for
“negation”, and ∧, ·, or & for “conjunction”. Commonly used symbols for the
“conditional” or “implication” are →, ⇒, and ⊃. Symbols for “biconditional,”
“bi-implication,” or “(material) equivalence” are ↔, ⇔, and ≡. The ⊥ sym-
bol is variously called “falsity,” “falsum,” “absurdity,” or “bottom.” The ⊤
symbol is variously called “truth,” “verum,” or “top.”
Definition 5.1 (Formula). The set Frm(L0) of formulas of propositional logic
is defined inductively as follows:

1. ⊥ is an atomic formula.

2. Every propositional variable pi is an atomic formula.

3. If ϕ is a formula, then ¬ϕ is a formula.

4. If ϕ and ψ are formulas, then (ϕ ∧ ψ) is a formula.

5. If ϕ and ψ are formulas, then (ϕ ∨ ψ) is a formula.

6. If ϕ and ψ are formulas, then (ϕ → ψ) is a formula.

7. Nothing else is a formula.


The definition of the set of formulas is an inductive definition. Essentially,
we construct the set of formulas in infinitely many stages. In
the initial stage, we pronounce all atomic formulas to be formulas; this corre-
sponds to the first few cases of the definition, i.e., the cases for ⊥ and pi. “Atomic
formula” thus means any formula of this form.
The other cases of the definition give rules for constructing new formulas
out of formulas already constructed. At the second stage, we can use them to
construct formulas out of atomic formulas. At the third stage, we construct
new formulas from the atomic formulas and those obtained in the second
stage, and so on. A formula is anything that is eventually constructed at such
a stage, and nothing else.
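The inductive definition can be mirrored directly in a datatype. The following
Python sketch (our own throwaway representation, used in this and later
illustrations, and not part of the official syntax) encodes formulas as nested
tuples:

    # atomic formulas
    bot = ('bot',)                  # the propositional constant for falsity
    def var(i): return ('var', i)   # the propositional variable p_i

    # formulas built by the formation rules
    def neg(phi): return ('not', phi)
    def conj(phi, psi): return ('and', phi, psi)
    def disj(phi, psi): return ('or', phi, psi)
    def cond(phi, psi): return ('if', phi, psi)

    # (p0 -> (p1 /\ ~p0)) is constructed in stages, just as in the definition
    example = cond(var(0), conj(var(1), neg(var(0))))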
Definition 5.2. Formulas constructed using the defined operators are to be
understood as follows:

1. ⊤ abbreviates ¬⊥.


2. ϕ ↔ ψ abbreviates ( ϕ → ψ) ∧ (ψ → ϕ).

Definition 5.3 (Syntactic identity). The symbol ≡ expresses syntactic identity


between strings of symbols, i.e., ϕ ≡ ψ iff ϕ and ψ are strings of symbols of
the same length and which contain the same symbol in each place.

The ≡ symbol may be flanked by strings obtained by concatenation, e.g.,


ϕ ≡ (ψ ∨ χ) means: the string of symbols ϕ is the same string as the one
obtained by concatenating an opening parenthesis, the string ψ, the ∨ symbol,
the string χ, and a closing parenthesis, in this order. If this is the case, then we
know that the first symbol of ϕ is an opening parenthesis, ϕ contains ψ as a
substring (starting at the second symbol), that substring is followed by ∨, etc.

5.3 Preliminaries
Theorem 5.4 (Principle of induction on formulas). If some property P holds of all
the atomic formulas and is such that

1. it holds for ¬ϕ whenever it holds for ϕ;

2. it holds for (ϕ ∧ ψ) whenever it holds for ϕ and ψ;

3. it holds for (ϕ ∨ ψ) whenever it holds for ϕ and ψ;

4. it holds for (ϕ → ψ) whenever it holds for ϕ and ψ;

then P holds of all formulas.

Proof. Let S be the collection of all formulas with property P. Clearly S ⊆
Frm(L0). S satisfies all the conditions of Definition 5.1: it contains all atomic
formulas and is closed under the logical operators. Frm(L0) is the smallest
such class, so Frm(L0) ⊆ S. So Frm(L0) = S, and every formula has property P.

Proposition 5.5. Any formula in Frm(L0 ) is balanced, in that it has as many left
parentheses as right ones.

Proposition 5.6. No proper initial segment of a formula is a formula.

Proposition 5.7 (Unique Readability). Any formula ϕ in Frm(L0 ) has exactly


one parsing as one of the following

1. ⊥.

2. pn for some pn ∈ At0 .

3. ¬ψ for some ψ in Frm(L0 ).

4. (ψ ∧ χ) for some formulas ψ and χ.


5. (ψ ∨ χ) for some formulas ψ and χ.

6. (ψ → χ) for some formulas ψ and χ.

Moreover, such parsing is unique.

Proof. By induction on ϕ. For instance, suppose that ϕ has two distinct read-
ings as (ψ → χ) and (ψ′ → χ′). Then ψ and ψ′ must be the same (or else one
would be a proper initial segment of the other); so if the two readings of ϕ are
distinct it must be because χ and χ′ are distinct readings of the same sequence
of symbols, which is impossible by the inductive hypothesis.

Definition 5.8 (Uniform Substitution). If ϕ and ψ are formulas, and pi is a
propositional variable, then ϕ[ψ/pi] denotes the result of replacing each oc-
currence of pi by an occurrence of ψ in ϕ; similarly, the simultaneous substi-
tution of p1, . . . , pn by formulas ψ1, . . . , ψn is denoted by ϕ[ψ1/p1, . . . , ψn/pn].
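Using the tuple representation of formulas sketched above, uniform substitu-
tion is a straightforward recursion on the structure of ϕ (again only an illus-
tration):

    def subst(phi, psi, i):
        # compute phi[psi/p_i]: replace every occurrence of p_i by psi
        if phi == ('var', i):
            return psi
        if phi[0] in ('var', 'bot'):
            return phi  # a different variable, or the constant for falsity
        return (phi[0],) + tuple(subst(arg, psi, i) for arg in phi[1:])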

5.4 Valuations and Satisfaction


Definition 5.9 (Valuations). Let {T, F} be the set of the two truth values,
“true” and “false.” A valuation for L0 is a function v assigning either T or
F to the propositional variables of the language, i.e., v : At0 → {T, F}.

Definition 5.10. Given a valuation v, define the evaluation function
v : Frm(L0) → {T, F} inductively by:

    v(⊥) = F;
    v(pn) = v(pn) (the value of a propositional variable is just the value the
        valuation assigns to it);
    v(¬ϕ) = T if v(ϕ) = F, and F otherwise;
    v(ϕ ∧ ψ) = T if v(ϕ) = T and v(ψ) = T, and F if v(ϕ) = F or v(ψ) = F;
    v(ϕ ∨ ψ) = T if v(ϕ) = T or v(ψ) = T, and F if v(ϕ) = F and v(ψ) = F;
    v(ϕ → ψ) = T if v(ϕ) = F or v(ψ) = T, and F if v(ϕ) = T and v(ψ) = F.

The valuation clauses correspond to the following truth tables:


    ϕ    ψ    ϕ ∧ ψ    ϕ ∨ ψ    ϕ → ψ    ϕ ↔ ψ
    T    T      T        T        T        T
    T    F      F        T        F        F
    F    T      F        T        T        F
    F    F      F        F        T        T
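The inductive definition of the evaluation function translates directly into a
recursive procedure. Here is a hedged Python sketch, using the tuple repre-
sentation of formulas from above and a dictionary from variable indices to
True (for T) and False (for F) as the valuation:

    def value(v, phi):
        # compute v(phi) by recursion on the structure of phi
        op = phi[0]
        if op == 'bot':
            return False
        if op == 'var':
            return v[phi[1]]
        if op == 'not':
            return not value(v, phi[1])
        if op == 'and':
            return value(v, phi[1]) and value(v, phi[2])
        if op == 'or':
            return value(v, phi[1]) or value(v, phi[2])
        if op == 'if':
            return (not value(v, phi[1])) or value(v, phi[2])
        raise ValueError('not a formula')

For instance, value({0: True, 1: False}, cond(var(0), var(1))) returns
False, in agreement with the second row of the table for ϕ → ψ.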

Theorem 5.11 (Local Determination). Suppose that v1 and v2 are valuations that
agree on the propositional letters occurring in ϕ, i.e., v1(pn) = v2(pn) whenever pn
occurs in ϕ. Then they also agree on ϕ, i.e., v1(ϕ) = v2(ϕ).

Proof. By induction on ϕ.

Definition 5.12 (Satisfaction). Using the evaluation function, we can define
the notion of satisfaction of a formula ϕ by a valuation v, v ⊨ ϕ, inductively as
follows. (We write v ⊭ ϕ to mean “not v ⊨ ϕ.”)

1. ϕ ≡ ⊥: v ⊭ ϕ.

2. ϕ ≡ pi: v ⊨ ϕ iff v(pi) = T.

3. ϕ ≡ ¬ψ: v ⊨ ϕ iff v ⊭ ψ.

4. ϕ ≡ (ψ ∧ χ): v ⊨ ϕ iff v ⊨ ψ and v ⊨ χ.

5. ϕ ≡ (ψ ∨ χ): v ⊨ ϕ iff v ⊨ ψ or v ⊨ χ (or both).

6. ϕ ≡ (ψ → χ): v ⊨ ϕ iff v ⊭ ψ or v ⊨ χ (or both).

If Γ is a set of formulas, v ⊨ Γ iff v ⊨ ϕ for every ϕ ∈ Γ.

Proposition 5.13. v ⊨ ϕ iff v(ϕ) = T.

Proof. By induction on ϕ.

5.5 Semantic Notions


We define the following semantic notions:

Definition 5.14. 1. A formula ϕ is satisfiable if for some v, v ⊨ ϕ; it is unsat-
    isfiable if for no v, v ⊨ ϕ;

2. A formula ϕ is a tautology if v ⊨ ϕ for all valuations v;

3. A formula ϕ is contingent if it is satisfiable but not a tautology;

4. If Γ is a set of formulas, Γ ⊨ ϕ (“Γ entails ϕ”) if and only if v ⊨ ϕ for
    every valuation v for which v ⊨ Γ;

5. If Γ is a set of formulas, Γ is satisfiable if there is a valuation v for which
    v ⊨ Γ, and Γ is unsatisfiable otherwise.
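Since, by Local Determination, only the variables occurring in ϕ matter, all of
these notions can be checked by running through the 2^k valuations of the k
relevant variables. A Python sketch along the lines of the evaluation function
above:

    from itertools import product

    def is_tautology(phi, variables):
        # phi is a tautology iff every valuation of the listed variables
        # satisfies it
        return all(value(dict(zip(variables, row)), phi)
                   for row in product([True, False], repeat=len(variables)))

    assert is_tautology(disj(var(0), neg(var(0))), [0])    # p0 or ~p0
    assert not is_tautology(conj(var(0), var(1)), [0, 1])  # merely contingent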


Proposition 5.15. 1. ϕ is a tautology if and only if ∅ ⊨ ϕ;

2. If Γ ⊨ ϕ and Γ ⊨ ϕ → ψ then Γ ⊨ ψ;

3. If Γ is satisfiable then every finite subset of Γ is also satisfiable;

4. Monotony: if Γ ⊆ ∆ and Γ ⊨ ϕ then also ∆ ⊨ ϕ;

5. Transitivity: if Γ ⊨ ϕ and ∆ ∪ {ϕ} ⊨ ψ then Γ ∪ ∆ ⊨ ψ.

Proof. Exercise.

Proposition 5.16. Γ ⊨ ϕ if and only if Γ ∪ {¬ϕ} is unsatisfiable.

Proof. Exercise.

Theorem 5.17 (Semantic Deduction Theorem). Γ ⊨ ϕ → ψ if and only if Γ ∪
{ϕ} ⊨ ψ.

Proof. Exercise.

Problems
Problem 5.1. Prove ??

Problem 5.2. Prove ??

Problem 5.3. Give a mathematically rigorous definition of ϕ[ψ/p] by induc-


tion.

Problem 5.4. Prove ??

Problem 5.5. Prove ??

Problem 5.6. Prove ??

Problem 5.7. Prove ??



Chapter 6

Derivation Systems

This chapter collects general material on derivation systems. A text-


book using a specific system can insert the introduction section plus the
relevant survey section at the beginning of the chapter introducing that
system.

6.1 Introduction
Logics commonly have both a semantics and a derivation system. The seman-
tics concerns concepts such as truth, satisfiability, validity, and entailment.
The purpose of derivation systems is to provide a purely syntactic method
of establishing entailment and validity. They are purely syntactic in the sense
that a derivation in such a system is a finite syntactic object, usually a sequence
(or other finite arrangement) of sentences or formulas. Good derivation sys-
tems have the property that any given sequence or arrangement of sentences
or formulas can be verified mechanically to be “correct.”
The simplest (and historically first) derivation systems for first-order logic
were axiomatic. A sequence of formulas counts as a derivation in such a sys-
tem if each individual formula in it is either among a fixed set of “axioms”
or follows from formulas coming before it in the sequence by one of a fixed
number of “inference rules”—and it can be mechanically verified if a formula
is an axiom and whether it follows correctly from other formulas by one of
the inference rules. Axiomatic proof systems are easy to describe—and also
easy to handle meta-theoretically—but derivations in them are hard to read
and understand, and are also hard to produce.
Other derivation systems have been developed with the aim of making it
easier to construct derivations or easier to understand derivations once they
are complete. Examples are natural deduction, truth trees, also known as
tableaux proofs, and the sequent calculus. Some derivation systems are de-
signed especially with mechanization in mind, e.g., the resolution method is
easy to implement in software (but its derivations are essentially impossible to
understand). Most of these other proof systems represent derivations as trees
of formulas rather than sequences. This makes it easier to see which parts of
a derivation depend on which other parts.
So for a given logic, such as first-order logic, the different derivation sys-
tems will give different explications of what it is for a sentence to be a theorem
and what it means for a sentence to be derivable from some others. However
that is done (via axiomatic derivations, natural deductions, sequent deriva-
tions, truth trees, resolution refutations), we want these relations to match the
semantic notions of validity and entailment. Let’s write ⊢ ϕ for “ϕ is a the-
orem” and “Γ ⊢ ϕ” for “ϕ is derivable from Γ.” However ⊢ is defined, we
want it to match up with ⊨, that is:

1. ⊢ ϕ if and only if ⊨ ϕ

2. Γ ⊢ ϕ if and only if Γ ⊨ ϕ

The “only if” direction of the above is called soundness. A derivation system is
sound if derivability guarantees entailment (or validity). Every decent deriva-
tion system has to be sound; unsound derivation systems are not useful at all.
After all, the entire purpose of a derivation is to provide a syntactic guarantee
of validity or entailment. We’ll prove soundness for the derivation systems
we present.
The converse “if” direction is also important: it is called completeness. A
complete derivation system is strong enough to show that ϕ is a theorem
whenever ϕ is valid, and that Γ ⊢ ϕ whenever Γ ⊨ ϕ. Completeness
is harder to establish, and some logics have no complete derivation systems.
First-order logic does. Kurt Gödel was the first one to prove completeness for
a derivation system of first-order logic in his 1929 dissertation.
Another concept that is connected to derivation systems is that of consis-
tency. A set of sentences is called inconsistent if anything whatsoever can be
derived from it, and consistent otherwise. Inconsistency is the syntactic coun-
terpart to unsatisfiability: like unsatisfiable sets, inconsistent sets of sentences
do not make good theories; they are defective in a fundamental way. Con-
sistent sets of sentences may not be true or useful, but at least they pass that
minimal threshold of logical usefulness. For different derivation systems the
specific definition of consistency of sets of sentences might differ, but like ⊢,
we want consistency to coincide with its semantic counterpart, satisfiability.
We want it to always be the case that Γ is consistent if and only if it is satis-
fiable. Here, the “if” direction amounts to completeness (consistency guaran-
tees satisfiability), and the “only if” direction amounts to soundness (satisfi-
ability guarantees consistency). In fact, for classical first-order logic, the two
versions of soundness and completeness are equivalent.


6.2 The Sequent Calculus


While many derivation systems operate with arrangements of sentences, the
sequent calculus operates with sequents. A sequent is an expression of the
form

    ϕ1, . . . , ϕm ⇒ ψ1, . . . , ψn,

that is, a pair of sequences of sentences, separated by the sequent symbol ⇒.
Either sequence may be empty. A derivation in the sequent calculus is a tree
of sequents, where the topmost sequents are of a special form (they are called
“initial sequents” or “axioms”) and every other sequent follows from the se-
quents immediately above it by one of the rules of inference. The rules of
inference either manipulate the sentences in the sequents (adding, removing,
or rearranging them on either the left or the right), or they introduce a com-
plex formula in the conclusion of the rule. For instance, the ∧L rule allows the
inference from ϕ, Γ ⇒ ∆ to ϕ ∧ ψ, Γ ⇒ ∆, and the →R rule allows the inference
from ϕ, Γ ⇒ ∆, ψ to Γ ⇒ ∆, ϕ → ψ, for any Γ, ∆, ϕ, and ψ. (In particular, Γ
and ∆ may be empty.)
The ⊢ relation based on the sequent calculus is defined as follows: Γ ⊢ ϕ
iff there is some sequence Γ0 such that every sentence in Γ0 is in Γ and there is a
derivation with the sequent Γ0 ⇒ ϕ at its root. ϕ is a theorem in the sequent
calculus if the sequent ⇒ ϕ has a derivation. For instance, here is a derivation
that shows that ⊢ (ϕ ∧ ψ) → ϕ:

        ϕ ⇒ ϕ
    -------------- ∧L
     ϕ ∧ ψ ⇒ ϕ
    -------------- →R
    ⇒ (ϕ ∧ ψ) → ϕ

A set Γ is inconsistent in the sequent calculus if there is a derivation of


Γ0 ⇒ (where every ϕ ∈ Γ0 is in Γ and the right side of the sequent is empty).
Using the rule WR, any sentence can be derived from an inconsistent set.
The sequent calculus was invented in the 1930s by Gerhard Gentzen. Be-
cause of its systematic and symmetric design, it is a very useful formalism for
developing a theory of derivations. It is relatively easy to find derivations in
the sequent calculus, but these derivations are often hard to read and their
connection to proofs are sometimes not easy to see. It has proved to be a very
elegant approach to derivation systems, however, and many logics have se-
quent calculus systems.

6.3 Natural Deduction


Natural deduction is a derivation system intended to mirror actual reasoning
(especially the kind of regimented reasoning employed by mathematicians).
Actual reasoning proceeds by a number of “natural” patterns. For instance,


proof by cases allows us to establish a conclusion on the basis of a disjunc-


tive premise, by establishing that the conclusion follows from either of the
disjuncts. Indirect proof allows us to establish a conclusion by showing that
its negation leads to a contradiction. Conditional proof establishes a condi-
tional claim “if . . . then . . . ” by showing that the consequent follows from
the antecedent. Natural deduction is a formalization of some of these nat-
ural inferences. Each of the logical connectives and quantifiers comes with
two rules, an introduction and an elimination rule, and they each correspond
to one such natural inference pattern. For instance, →Intro corresponds to
conditional proof, and ∨Elim to proof by cases. A particularly simple rule is
∧Elim which allows the inference from ϕ ∧ ψ to ϕ (or ψ).
One feature that distinguishes natural deduction from other derivation
systems is its use of assumptions. A derivation in natural deduction is a tree
of formulas. A single formula stands at the root of the tree of formulas, and
the “leaves” of the tree are formulas from which the conclusion is derived.
In natural deduction, some leaf formulas play a role inside the derivation but
are “used up” by the time the derivation reaches the conclusion. This corre-
sponds to the practice, in actual reasoning, of introducing hypotheses which
only remain in effect for a short while. For instance, in a proof by cases, we
assume the truth of each of the disjuncts; in conditional proof, we assume the
truth of the antecedent; in indirect proof, we assume the truth of the nega-
tion of the conclusion. This way of introducing hypothetical assumptions
and then doing away with them in the service of establishing an intermedi-
ate step is a hallmark of natural deduction. The formulas at the leaves of a
natural deduction derivation are called assumptions, and some of the rules of
inference may “discharge” them. For instance, if we have a derivation of ψ
from some assumptions which include ϕ, then the →Intro rule allows us to
infer ϕ → ψ and discharge any assumption of the form ϕ. (To keep track of
which assumptions are discharged at which inferences, we label the inference
and the assumptions it discharges with a number.) The assumptions that re-
main undischarged at the end of the derivation are together sufficient for the
truth of the conclusion, and so a derivation establishes that its undischarged
assumptions entail its conclusion.
The relation Γ ⊢ ϕ based on natural deduction holds iff there is a deriva-
tion in which ϕ is the last sentence in the tree, and every leaf which is undis-
charged is in Γ. ϕ is a theorem in natural deduction iff there is a derivation in
which ϕ is the last sentence and all assumptions are discharged. For instance,
here is a derivation that shows that ⊢ (ϕ ∧ ψ) → ϕ:

     [ϕ ∧ ψ]¹
    ----------- ∧Elim
        ϕ
    ------------- 1 →Intro
    (ϕ ∧ ψ) → ϕ


The label 1 indicates that the assumption ϕ ∧ ψ is discharged at the →Intro


inference.
A set Γ is inconsistent iff Γ ⊢ ⊥ in natural deduction. The rule ⊥I makes
it so that from an inconsistent set, any sentence can be derived.
Natural deduction systems were developed by Gerhard Gentzen and Sta-
nisław Jaśkowski in the 1930s, and later refined by Dag Prawitz and Fred-
eric Fitch. Because its inferences mirror natural methods of proof, it is favored
by philosophers. The versions developed by Fitch are often used in introduc-
tory logic textbooks. In the philosophy of logic, the rules of natural deduc-
tion have sometimes been taken to give the meanings of the logical operators
(“proof-theoretic semantics”).

6.4 Tableaux

While many derivation systems operate with arrangements of sentences, tableaux


operate with signed formulas. A signed formula is a pair consisting of a truth
value sign (T or F) and a sentence

T ϕ or F ϕ.

A tableau consists of signed formulas arranged in a downward-branching


tree. It begins with a number of assumptions and continues with signed for-
mulas which result from one of the signed formulas above it by applying one
of the rules of inference. Each rule allows us to add one or more signed formu-
las to the end of a branch, or two signed formulas side by side—in this case a
branch splits into two, with the two added signed formulas forming the ends
of the two branches.
A rule applied to a complex signed formula results in the addition of
signed formulas which are immediate sub-formulas. The rules come in pairs,
one for each of the two signs. For instance, the ∧T rule applies to T ϕ ∧ ψ,
and allows the addition of both of the signed formulas T ϕ and T ψ to the
end of any branch containing T ϕ ∧ ψ, and the ∧F rule allows a branch to
be split by adding F ϕ and F ψ side-by-side. A tableau is closed if every one
of its branches contains a matching pair of signed formulas T ϕ and F ϕ.
The ⊢ relation based on tableaux is defined as follows: Γ ⊢ ϕ iff there is
some finite set Γ0 = {ψ1, . . . , ψn} ⊆ Γ such that there is a closed tableau for
the assumptions

    {F ϕ, T ψ1, . . . , T ψn}

For instance, here is a closed tableau that shows that ⊢ (ϕ ∧ ψ) → ϕ:


    1.   F (ϕ ∧ ψ) → ϕ    Assumption
    2.   T ϕ ∧ ψ          →F 1
    3.   F ϕ              →F 1
    4.   T ϕ              ∧T 2
    5.   T ψ              ∧T 2

A set Γ is inconsistent in the tableau calculus if there is a closed tableau for


assumptions
{Tψ1 , . . . , Tψn }
for some ψi ∈ Γ.
Tableaux were invented in the 1950s independently by Evert Beth and Jaakko
Hintikka, and simplified and popularized by Raymond Smullyan. They are
very easy to use, since constructing a tableau is a very systematic proce-
dure. Because of the systematic nature of tableaux, they also lend themselves
to implementation by computer. However, tableaux are often hard to read and
their connection to proofs is sometimes not easy to see. The approach is also
quite general, and many different logics have tableau systems. Tableaux also
help us to find structures that satisfy given (sets of) sentences: if the set is
satisfiable, it won’t have a closed tableau, i.e., any tableau will have an open
branch. The satisfying structure can be “read off” an open branch, provided
all rules it is possible to apply have been applied on that branch. There is also
a very close connection to the sequent calculus: essentially, a closed tableau is
a condensed derivation in the sequent calculus, written upside-down.

6.5 Axiomatic Derivations


Axiomatic derivations are the oldest and simplest logical derivation systems.
Their derivations are simply sequences of sentences. A sequence of sentences
counts as a correct derivation if every sentence ϕ in it satisfies one of the
following conditions:

1. ϕ is an axiom, or

2. ϕ is an element of a given set Γ of sentences, or

3. ϕ is justified by a rule of inference.

To be an axiom, ϕ has to have the form of one of a number of fixed sentence
schemas. There are many sets of axiom schemas that provide a satisfactory
(sound and complete) derivation system for first-order logic. Some are orga-
nized according to the connectives they govern, e.g., the schemas

ϕ → (ψ → ϕ) ψ → (ψ ∨ χ) (ψ ∧ χ) → ψ


are common axioms that govern →, ∨ and ∧. Some axiom systems aim at a
minimal number of axioms. Depending on the connectives that are taken as
primitives, it is even possible to find axiom systems that consist of a single
axiom.
A rule of inference is a conditional statement that gives a sufficient condi-
tion for a sentence in a derivation to be justified. Modus ponens is one very
common such rule: it says that if ϕ and ϕ → ψ are already justified, then ψ is
justified. This means that a line in a derivation containing the sentence ψ is
justified, provided that both ϕ and ϕ → ψ (for some sentence ϕ) appear in the
derivation before ψ.
The ⊢ relation based on axiomatic derivations is defined as follows: Γ ⊢ ϕ
iff there is a derivation with the sentence ϕ as its last formula (and Γ is taken
as the set of sentences in that derivation which are justified by (2) above). ϕ
is a theorem if ϕ has a derivation where Γ is empty, i.e., every sentence in the
derivation is justified either by (1) or (3). For instance, here is a derivation that
shows that ⊢ ϕ → (ψ → (ψ ∨ ϕ)):

1. ψ → (ψ ∨ ϕ)
2. (ψ → (ψ ∨ ϕ)) → ( ϕ → (ψ → (ψ ∨ ϕ)))
3. ϕ → (ψ → (ψ ∨ ϕ))

The sentence on line 1 is of the form of the axiom ϕ → ( ϕ ∨ ψ) (with the roles
of ϕ and ψ reversed). The sentence on line 2 is of the form of the axiom ϕ →
(ψ → ϕ). Thus, both lines are justified. Line 3 is justified by modus ponens: if
we abbreviate it as θ, then line 2 has the form χ → θ, where χ is ψ → (ψ ∨ ϕ),
i.e., line 1.
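Checking such a derivation is mechanical, as promised in section 6.1. The
following Python sketch (ours) verifies a sequence of formulas against modus
ponens, a supplied axiom test, and a set Γ; the formulas are the tuples of
chapter 5, and is_axiom is an assumed parameter rather than a fixed imple-
mentation:

    def is_correct_derivation(formulas, is_axiom, gamma=()):
        # each formula must be an axiom, a member of gamma, or follow by
        # modus ponens from earlier formulas phi and ('if', phi, psi)
        for k, psi in enumerate(formulas):
            if is_axiom(psi) or psi in gamma:
                continue
            if any(('if', phi, psi) in formulas[:k] for phi in formulas[:k]):
                continue
            return False
        return True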
A set Γ is inconsistent if Γ ⊢ ⊥. A complete axiom system will also prove
⊥ → ϕ for any ϕ, and so if Γ is inconsistent, then Γ ⊢ ϕ for any ϕ.
Systems of axiomatic derivations for logic were first given by Gottlob Frege
in his 1879 Begriffsschrift, which for this reason is often considered the first
work of modern logic. They were perfected in Alfred North Whitehead and
Bertrand Russell’s Principia Mathematica and by David Hilbert and his stu-
dents in the 1920s. They are thus often called “Frege systems” or “Hilbert
systems.” They are very versatile in that it is often easy to find an axiomatic
system for a logic. Because derivations have a very simple structure and only
one or two inference rules, it is also relatively easy to prove things about them.
However, they are very hard to use in practice, i.e., it is difficult to find and
write proofs.



Chapter 7

The Sequent Calculus

This chapter presents Gentzen’s standard sequent calculus LK for clas-


sical first-order logic. It could use more examples and exercises. To in-
clude or exclude material relevant to the sequent calculus as a proof sys-
tem, use the “prfLK” tag.

7.1 Rules and Derivations


For the following, let Γ, ∆, Π, Λ represent finite sequences of sentences.

Definition 7.1 (Sequent). A sequent is an expression of the form

Γ⇒∆

where Γ and ∆ are finite (possibly empty) sequences of sentences of the lan-
guage L. Γ is called the antecedent, while ∆ is the succedent.

The intuitive idea behind a sequent is: if all of the sentences in the an-
tecedent hold, then at least one of the sentences in the succedent holds. That
is, if Γ = ⟨ϕ1, . . . , ϕm⟩ and ∆ = ⟨ψ1, . . . , ψn⟩, then Γ ⇒ ∆ holds iff

    (ϕ1 ∧ · · · ∧ ϕm) → (ψ1 ∨ · · · ∨ ψn)

holds. There are two special cases: when Γ is empty and when ∆ is empty.
When Γ is empty, i.e., m = 0, ⇒ ∆ holds iff ψ1 ∨ · · · ∨ ψn holds. When ∆ is
empty, i.e., n = 0, Γ ⇒ holds iff ¬(ϕ1 ∧ · · · ∧ ϕm) does. We say a sequent is
valid iff the corresponding sentence is valid.
If Γ is a sequence of sentences, we write Γ, ϕ for the result of appending
ϕ to the right end of Γ (and ϕ, Γ for the result of appending ϕ to the left end
of Γ). If ∆ is a sequence of sentences also, then Γ, ∆ is the concatenation of the
two sequences.


Definition 7.2 (Initial Sequent). An initial sequent is a sequent of one of the


following forms:

1. ϕ ⇒ ϕ

2. ⊥ ⇒

for any sentence ϕ in the language.

Derivations in the sequent calculus are certain trees of sequents, where


the topmost sequents are initial sequents, and if a sequent stands below one
or two other sequents, it must follow correctly by a rule of inference. The
rules for LK are divided into two main types: logical rules and structural rules.
The logical rules are named for the main operator of the sentence containing
ϕ and/or ψ in the lower sequent. Each one comes in two versions, one for
inferring a sequent with the sentence containing the logical operator on the left,
and one with the sentence on the right.

7.2 Propositional Rules

Rules for ¬

     Γ ⇒ ∆, ϕ                  ϕ, Γ ⇒ ∆
    ----------- ¬L            ----------- ¬R
     ¬ϕ, Γ ⇒ ∆                 Γ ⇒ ∆, ¬ϕ

Rules for ∧

      ϕ, Γ ⇒ ∆                   ψ, Γ ⇒ ∆
    -------------- ∧L          -------------- ∧L
     ϕ ∧ ψ, Γ ⇒ ∆               ϕ ∧ ψ, Γ ⇒ ∆

    Γ ⇒ ∆, ϕ    Γ ⇒ ∆, ψ
    --------------------- ∧R
        Γ ⇒ ∆, ϕ ∧ ψ

Rules for ∨

    ϕ, Γ ⇒ ∆    ψ, Γ ⇒ ∆
    --------------------- ∨L
        ϕ ∨ ψ, Γ ⇒ ∆

      Γ ⇒ ∆, ϕ                   Γ ⇒ ∆, ψ
    -------------- ∨R          -------------- ∨R
     Γ ⇒ ∆, ϕ ∨ ψ               Γ ⇒ ∆, ϕ ∨ ψ


Rules for →

    Γ ⇒ ∆, ϕ    ψ, Π ⇒ Λ             ϕ, Γ ⇒ ∆, ψ
    --------------------- →L        -------------- →R
     ϕ → ψ, Γ, Π ⇒ ∆, Λ              Γ ⇒ ∆, ϕ → ψ

7.3 Structural Rules


We also need a few rules that allow us to rearrange sentences in the left and
right side of a sequent. Since the logical rules require that the sentences in the
premise which the rule acts upon stand either to the far left or to the far right,
we need an “exchange” rule that allows us to move sentences to the right
position. It’s also important sometimes to be able to combine two identical
sentences into one, and to add a sentence on either side.

Weakening

      Γ ⇒ ∆                    Γ ⇒ ∆
    ---------- WL            ---------- WR
     ϕ, Γ ⇒ ∆                 Γ ⇒ ∆, ϕ

Contraction

    ϕ, ϕ, Γ ⇒ ∆              Γ ⇒ ∆, ϕ, ϕ
    ------------ CL          ------------ CR
      ϕ, Γ ⇒ ∆                 Γ ⇒ ∆, ϕ

Exchange

    Γ, ϕ, ψ, Π ⇒ ∆           Γ ⇒ ∆, ϕ, ψ, Λ
    --------------- XL       --------------- XR
    Γ, ψ, ϕ, Π ⇒ ∆           Γ ⇒ ∆, ψ, ϕ, Λ

A series of weakening, contraction, and exchange inferences will often be in-


dicated by double inference lines.
The following rule, called “cut,” is not strictly speaking necessary, but
makes it a lot easier to reuse and combine derivations.

    Γ ⇒ ∆, ϕ    ϕ, Π ⇒ Λ
    --------------------- Cut
         Γ, Π ⇒ ∆, Λ


7.4 Derivations
We’ve said what an initial sequent looks like, and we’ve given the rules of
inference. Derivations in the sequent calculus are inductively generated from
these: each derivation either is an initial sequent on its own, or consists of one
or two derivations followed by an inference.

Definition 7.3 (LK derivation). An LK-derivation of a sequent S is a tree of
sequents satisfying the following conditions:

1. The topmost sequents of the tree are initial sequents.

2. The bottommost sequent of the tree is S.

3. Every sequent in the tree except S is a premise of a correct application of
an inference rule whose conclusion stands directly below that sequent
in the tree.

We then say that S is the end-sequent of the derivation and that S is derivable in
LK (or LK-derivable).
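Derivation trees can be represented directly. The following Python sketch is our own encoding, not part of the text: a derivation is a nested tuple, and we read off the data Definition 7.3 mentions. Sequents here are pairs (antecedent, succedent) of tuples of sentences.

    # A leaf is ("init", S) for an initial sequent S; an inner node is
    # (rule_name, conclusion_sequent, subtree1[, subtree2]).

    def end_sequent(d):
        """The bottommost sequent of a derivation tree."""
        return d[1]

    def initial_sequents(d):
        """The topmost sequents of the tree."""
        if d[0] == "init":
            return [d[1]]
        return [s for sub in d[2:] for s in initial_sequents(sub)]

    # χ ⇒ χ, then WL, then XL (cf. Example 7.4 below):
    d = ("XL", (("χ", "θ"), ("χ",)),
           ("WL", (("θ", "χ"), ("χ",)),
              ("init", (("χ",), ("χ",)))))
    print(end_sequent(d))        # (('χ', 'θ'), ('χ',))
    print(initial_sequents(d))   # [(('χ',), ('χ',))]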

Example 7.4. Every initial sequent, e.g., χ ⇒ χ is a derivation. We can obtain
a new derivation from this by applying, say, the WL rule,

  Γ ⇒ ∆
---------- WL
 ϕ, Γ ⇒ ∆
The rule, however, is meant to be general: we can replace the ϕ in the rule
with any sentence, e.g., also with θ. If the premise matches our initial sequent
χ ⇒ χ, that means that both Γ and ∆ are just χ, and the conclusion would
then be θ, χ ⇒ χ. So, the following is a derivation:

  χ ⇒ χ
---------- WL
 θ, χ ⇒ χ
We can now apply another rule, say XL, which allows us to switch two sen-
tences on the left. So, the following is also a correct derivation:

  χ ⇒ χ
---------- WL
 θ, χ ⇒ χ
---------- XL
 χ, θ ⇒ χ
In this application of the rule, which was given as

 Γ, ϕ, ψ, Π ⇒ ∆
----------------- XL
 Γ, ψ, ϕ, Π ⇒ ∆,
both Γ and Π were empty, ∆ is χ, and the roles of ϕ and ψ are played by θ
and χ, respectively. In much the same way, we also see that


  θ ⇒ θ
---------- WL
 χ, θ ⇒ θ
is a derivation. Now we can take these two derivations, and combine them
using ∧R. That rule was

 Γ ⇒ ∆, ϕ     Γ ⇒ ∆, ψ
------------------------ ∧R
      Γ ⇒ ∆, ϕ ∧ ψ
In our case, the premises must match the last sequents of the derivations end-
ing in the premises. That means that Γ is χ, θ, ∆ is empty, ϕ is χ and ψ is θ. So
the conclusion, if the inference should be correct, is χ, θ ⇒ χ ∧ θ. Of course,
we can also reverse the premises, then ϕ would be θ and ψ would be χ. So
both of the following are correct derivations.
  χ ⇒ χ
---------- WL
 θ, χ ⇒ χ                θ ⇒ θ
---------- XL          ---------- WL
 χ, θ ⇒ χ               χ, θ ⇒ θ
--------------------------------- ∧R
         χ, θ ⇒ χ ∧ θ

  θ ⇒ θ                  χ ⇒ χ
---------- WL          ---------- WL
 χ, θ ⇒ θ               θ, χ ⇒ χ
                       ---------- XL
                        χ, θ ⇒ χ
--------------------------------- ∧R
         χ, θ ⇒ θ ∧ χ
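To see how rule applications can be viewed as operations on sequents, here is a small Python sketch that mirrors the derivation just constructed. The encoding is ours, not the text's: sequents are pairs of tuples, and each rule maps its premise(s) to the conclusion.

    # Sequents as (antecedent, succedent), both tuples. Sentences can be any
    # hashable objects, e.g., the strings "χ" and "θ".

    def WL(seq, phi):
        """Weakening left: from Γ ⇒ ∆ infer ϕ, Γ ⇒ ∆."""
        ant, suc = seq
        return ((phi,) + ant, suc)

    def XL(seq, i):
        """Exchange left: swap the sentences at positions i and i+1."""
        ant, suc = seq
        ant = list(ant)
        ant[i], ant[i + 1] = ant[i + 1], ant[i]
        return (tuple(ant), suc)

    def AndR(seq1, seq2):
        """∧R: from Γ ⇒ ∆, ϕ and Γ ⇒ ∆, ψ infer Γ ⇒ ∆, ϕ ∧ ψ."""
        (ant1, suc1), (ant2, suc2) = seq1, seq2
        assert ant1 == ant2 and suc1[:-1] == suc2[:-1], "contexts must agree"
        return (ant1, suc1[:-1] + (("and", suc1[-1], suc2[-1]),))

    # Rebuilding the derivation of χ, θ ⇒ χ ∧ θ from Example 7.4:
    init1 = (("χ",), ("χ",))        # χ ⇒ χ
    left  = XL(WL(init1, "θ"), 0)   # θ, χ ⇒ χ, then χ, θ ⇒ χ
    init2 = (("θ",), ("θ",))        # θ ⇒ θ
    right = WL(init2, "χ")          # χ, θ ⇒ θ
    print(AndR(left, right))        # (('χ', 'θ'), (('and', 'χ', 'θ'),))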

7.5 Examples of Derivations


Example 7.5. Give an LK-derivation for the sequent ϕ ∧ ψ ⇒ ϕ.
We begin by writing the desired end-sequent at the bottom of the deriva-
tion.

ϕ ∧ ψ ⇒ ϕ
Next, we need to figure out what kind of inference could have a lower sequent
of this form. This could be a structural rule, but it is a good idea to start by
looking for a logical rule. The only logical connective occurring in the lower
sequent is ∧, so we’re looking for an ∧ rule, and since the ∧ symbol occurs in
the antecedent, we’re looking at the ∧L rule.

------------ ∧L
 ϕ ∧ ψ ⇒ ϕ

There are two options for what could have been the upper sequent of the ∧L
inference: we could have an upper sequent of ϕ ⇒ ϕ, or of ψ ⇒ ϕ. Clearly,
ϕ ⇒ ϕ is an initial sequent (which is a good thing), while ψ ⇒ ϕ is not
derivable in general. We fill in the upper sequent:
   ϕ ⇒ ϕ
------------ ∧L
 ϕ ∧ ψ ⇒ ϕ

We now have a correct LK-derivation of the sequent ϕ ∧ ψ ⇒ ϕ.

Example 7.6. Give an LK-derivation for the sequent ¬ϕ ∨ ψ ⇒ ϕ → ψ.
Begin by writing the desired end-sequent at the bottom of the derivation.


¬ϕ ∨ ψ ⇒ ϕ → ψ
To find a logical rule that could give us this end-sequent, we look at the log-
ical connectives in the end-sequent: ¬, ∨, and →. We only care at the mo-
ment about ∨ and → because they are main operators of sentences in the end-
sequent, while ¬ is inside the scope of another connective, so we will take care
of it later. Our options for logical rules for the final inference are therefore the
∨L rule and the →R rule. We could pick either rule, really, but let’s pick the
→R rule (if for no reason other than it allows us to put off splitting into two
branches). According to the form of →R inferences which can yield the lower
sequent, this must look like:

  ϕ, ¬ϕ ∨ ψ ⇒ ψ
------------------ →R
 ¬ϕ ∨ ψ ⇒ ϕ → ψ
If we move ¬ ϕ ∨ ψ to the outside of the antecedent, we can apply the ∨L
rule. According to the schema, this must split into two upper sequents as
follows:

 ¬ϕ, ϕ ⇒ ψ     ψ, ϕ ⇒ ψ
-------------------------- ∨L
      ¬ϕ ∨ ψ, ϕ ⇒ ψ
     ---------------- XL
      ϕ, ¬ϕ ∨ ψ ⇒ ψ
     ------------------ →R
     ¬ϕ ∨ ψ ⇒ ϕ → ψ
Remember that we are trying to wind our way up to initial sequents; we seem
to be pretty close! The right branch is just one weakening and one exchange
away from an initial sequent and then it is done:

                      ψ ⇒ ψ
                    ----------- WL
                     ϕ, ψ ⇒ ψ
                    ----------- XL
 ¬ϕ, ϕ ⇒ ψ           ψ, ϕ ⇒ ψ
------------------------------- ∨L
      ¬ϕ ∨ ψ, ϕ ⇒ ψ
     ---------------- XL
      ϕ, ¬ϕ ∨ ψ ⇒ ψ
     ------------------ →R
     ¬ϕ ∨ ψ ⇒ ϕ → ψ

Now looking at the left branch, the only logical connective in any sentence
is the ¬ symbol in the antecedent sentences, so we’re looking at an instance of
the ¬L rule.

  ϕ ⇒ ψ, ϕ                       ψ ⇒ ψ
------------ ¬L                ----------- WL
 ¬ϕ, ϕ ⇒ ψ                      ϕ, ψ ⇒ ψ
                               ----------- XL
                                ψ, ϕ ⇒ ψ
------------------------------------------ ∨L
          ¬ϕ ∨ ψ, ϕ ⇒ ψ
         ---------------- XL
          ϕ, ¬ϕ ∨ ψ ⇒ ψ
         ------------------ →R
         ¬ϕ ∨ ψ ⇒ ϕ → ψ


Similarly to how we finished off the right branch, we are just one weakening
and one exchange away from finishing off this left branch as well.

   ϕ ⇒ ϕ
------------ WR
  ϕ ⇒ ϕ, ψ
------------ XR                  ψ ⇒ ψ
  ϕ ⇒ ψ, ϕ                     ----------- WL
------------ ¬L                 ϕ, ψ ⇒ ψ
 ¬ϕ, ϕ ⇒ ψ                     ----------- XL
                                ψ, ϕ ⇒ ψ
------------------------------------------ ∨L
          ¬ϕ ∨ ψ, ϕ ⇒ ψ
         ---------------- XL
          ϕ, ¬ϕ ∨ ψ ⇒ ψ
         ------------------ →R
         ¬ϕ ∨ ψ ⇒ ϕ → ψ

Example 7.7. Give an LK-derivation of the sequent ¬ϕ ∨ ¬ψ ⇒ ¬(ϕ ∧ ψ).
Using the techniques from above, we start by writing the desired end-sequent
at the bottom.

¬ ϕ ∨ ¬ψ ⇒ ¬( ϕ ∧ ψ)

The available main connectives of sentences in the end-sequent are the ∨ sym-
bol and the ¬ symbol. It would work to apply either the ∨L or the ¬R rule
here, but we start with the ¬R rule because it avoids splitting up into two
branches for a moment:

 ϕ ∧ ψ, ¬ϕ ∨ ¬ψ ⇒
--------------------- ¬R
 ¬ϕ ∨ ¬ψ ⇒ ¬(ϕ ∧ ψ)

Now we have a choice of whether to look at the ∧L or the ∨L rule. Let's see
what happens when we apply the ∧L rule: we have a choice to start with
either the sequent ϕ, ¬ϕ ∨ ¬ψ ⇒ or the sequent ψ, ¬ϕ ∨ ¬ψ ⇒ . Since the
proof is symmetric with regard to ϕ and ψ, let's go with the former:

  ϕ, ¬ϕ ∨ ¬ψ ⇒
-------------------- ∧L
 ϕ ∧ ψ, ¬ϕ ∨ ¬ψ ⇒
-------------------- ¬R
 ¬ϕ ∨ ¬ψ ⇒ ¬(ϕ ∧ ψ)

Continuing to fill in the derivation, we see that we run into a problem:

                       ?
 ϕ ⇒ ϕ                ϕ ⇒ ψ
---------- ¬L        ---------- ¬L
 ¬ϕ, ϕ ⇒              ¬ψ, ϕ ⇒
------------------------------ ∨L
      ¬ϕ ∨ ¬ψ, ϕ ⇒
     ---------------- XL
      ϕ, ¬ϕ ∨ ¬ψ ⇒
     ------------------- ∧L
      ϕ ∧ ψ, ¬ϕ ∨ ¬ψ ⇒
     --------------------- ¬R
      ¬ϕ ∨ ¬ψ ⇒ ¬(ϕ ∧ ψ)


The top of the right branch cannot be reduced any further, and it cannot be
brought by way of structural inferences to an initial sequent, so this is not the
right path to take. So clearly, it was a mistake to apply the ∧L rule above.
Going back to what we had before and carrying out the ∨L rule instead, we
get

 ¬ϕ, ϕ ∧ ψ ⇒     ¬ψ, ϕ ∧ ψ ⇒
------------------------------- ∨L
      ¬ϕ ∨ ¬ψ, ϕ ∧ ψ ⇒
     ------------------- XL
      ϕ ∧ ψ, ¬ϕ ∨ ¬ψ ⇒
     --------------------- ¬R
      ¬ϕ ∨ ¬ψ ⇒ ¬(ϕ ∧ ψ)

Completing each branch as we’ve done before, we get

  ϕ ⇒ ϕ                  ψ ⇒ ψ
------------ ∧L        ------------ ∧L
 ϕ ∧ ψ ⇒ ϕ              ϕ ∧ ψ ⇒ ψ
------------ ¬L        ------------ ¬L
 ¬ϕ, ϕ ∧ ψ ⇒            ¬ψ, ϕ ∧ ψ ⇒
------------------------------------- ∨L
        ¬ϕ ∨ ¬ψ, ϕ ∧ ψ ⇒
       ------------------- XL
        ϕ ∧ ψ, ¬ϕ ∨ ¬ψ ⇒
       --------------------- ¬R
        ¬ϕ ∨ ¬ψ ⇒ ¬(ϕ ∧ ψ)

(We could have carried out the ∧ rules lower than the ¬ rules in these steps
and still obtained a correct derivation).

Example 7.8. So far we haven't used the contraction rule, but it is sometimes
required. Here's an example where that happens. Suppose we want to prove
⇒ ϕ ∨ ¬ϕ. Applying ∨R backwards would give us one of these two derivations:

  ⇒ ϕ                       ϕ ⇒
------------ ∨R           --------- ¬R
 ⇒ ϕ ∨ ¬ϕ                   ⇒ ¬ϕ
                          ------------ ∨R
                           ⇒ ϕ ∨ ¬ϕ
Neither of these of course ends in an initial sequent. The trick is to realize that
the contraction rule allows us to combine two copies of a sentence into one—
and when we’re searching for a proof, i.e., going from bottom to top, we can
keep a copy of ϕ ∨ ¬ ϕ in the premise, e.g.,

 ⇒ ϕ ∨ ¬ϕ, ϕ
------------------- ∨R
 ⇒ ϕ ∨ ¬ϕ, ϕ ∨ ¬ϕ
------------------- CR
 ⇒ ϕ ∨ ¬ϕ

Now we can apply ∨R a second time, and also get ¬ ϕ, which leads to a com-
plete derivation.


  ϕ ⇒ ϕ
------------- ¬R
 ⇒ ϕ, ¬ϕ
------------- ∨R
 ⇒ ϕ, ϕ ∨ ¬ϕ
------------- XR
 ⇒ ϕ ∨ ¬ϕ, ϕ
------------------- ∨R
 ⇒ ϕ ∨ ¬ϕ, ϕ ∨ ¬ϕ
------------------- CR
 ⇒ ϕ ∨ ¬ϕ
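The bottom-up search illustrated in this section can be automated. The sketch below is our own illustration: it implements not LK itself but a G3-style cousin in which each rule keeps enough information that explicit contraction and exchange are never needed (the ∨R step, for instance, puts both disjuncts into the succedent at once, which is exactly what the CR trick above achieves). All names and the tuple encoding are assumptions of ours.

    # Root-first proof search for propositional sequents. Formulas are
    # ("atom", s), ("bot",), ("not", f), ("and", f, g), ("or", f, g), ("imp", f, g).

    def provable(gamma, delta):
        gamma, delta = list(gamma), list(delta)
        # Initial-sequent check: an atom on both sides, or ⊥ on the left.
        for f in gamma:
            if f == ("bot",) or (f[0] == "atom" and f in delta):
                return True
        # Decompose the first non-atomic formula on the left ...
        for i, f in enumerate(gamma):
            rest = gamma[:i] + gamma[i+1:]
            if f[0] == "not":
                return provable(rest, delta + [f[1]])
            if f[0] == "and":
                return provable(rest + [f[1], f[2]], delta)
            if f[0] == "or":
                return provable(rest + [f[1]], delta) and provable(rest + [f[2]], delta)
            if f[0] == "imp":
                return provable(rest, delta + [f[1]]) and provable(rest + [f[2]], delta)
        # ... or on the right.
        for i, f in enumerate(delta):
            rest = delta[:i] + delta[i+1:]
            if f[0] == "not":
                return provable(gamma + [f[1]], rest)
            if f[0] == "and":
                return provable(gamma, rest + [f[1]]) and provable(gamma, rest + [f[2]])
            if f[0] == "or":
                return provable(gamma, rest + [f[1], f[2]])
            if f[0] == "imp":
                return provable(gamma + [f[1]], rest + [f[2]])
        return False  # only atoms left, and no clash

    p = ("atom", "p")
    print(provable([], [("or", p, ("not", p))]))   # ⇒ ϕ ∨ ¬ϕ : True
    print(provable([], [p]))                       # ⇒ ϕ : False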

This section collects the definitions of the provability relation and con-
sistency for the sequent calculus.

7.6 Proof-Theoretic Notions


Just as we've defined a number of important semantic notions (validity, entailment,
satisfiability), we now define corresponding proof-theoretic notions. These
are not defined by appeal to satisfaction of sentences in structures, but by appeal
to the derivability or non-derivability of certain sequents. It was an important
discovery that these notions coincide. That they do is the content of
the soundness and completeness theorems.

Definition 7.9 (Theorems). A sentence ϕ is a theorem if there is a derivation
in LK of the sequent ⇒ ϕ. We write ⊢ ϕ if ϕ is a theorem and ⊬ ϕ if it is not.

Definition 7.10 (Derivability). A sentence ϕ is derivable from a set of sentences Γ,
Γ ⊢ ϕ, iff there is a finite subset Γ0 ⊆ Γ and a sequence Γ0′ of the sentences
in Γ0 such that LK derives Γ0′ ⇒ ϕ. If ϕ is not derivable from Γ we write Γ ⊬ ϕ.

Because of the contraction, weakening, and exchange rules, the order and
number of sentences in Γ0′ does not matter: if a sequent Γ0′ ⇒ ϕ is derivable,
then so is Γ0′′ ⇒ ϕ for any Γ0′′ that contains the same sentences as Γ0′.
For instance, if Γ0 = {ψ, χ} then both Γ0′ = ⟨ψ, ψ, χ⟩ and Γ0′′ = ⟨χ, χ, ψ⟩ are
sequences containing just the sentences in Γ0. If a sequent containing one is
derivable, so is the other, e.g.:

 ψ, ψ, χ ⇒ ϕ
-------------- CL
  ψ, χ ⇒ ϕ
-------------- XL
  χ, ψ ⇒ ϕ
-------------- WL
 χ, χ, ψ ⇒ ϕ

From now on we'll say that if Γ0 is a finite set of sentences then Γ0 ⇒ ϕ is
any sequent where the antecedent is a sequence of sentences in Γ0 and tacitly
include contractions, exchanges, and weakenings if necessary.
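This convention is harmless precisely because of the structural rules. A small Python illustration (our own encoding, not the text's) of the resulting normal form:

    # Because of W, C, and X, only the *sets* of sentences in a sequent matter
    # for derivability. Two sequents with the same normal form are
    # interderivable using only weakening, contraction, and exchange.

    def normalize(seq):
        ant, suc = seq
        return (frozenset(ant), frozenset(suc))

    s1 = (("ψ", "ψ", "χ"), ("ϕ",))   # ψ, ψ, χ ⇒ ϕ
    s2 = (("χ", "χ", "ψ"), ("ϕ",))   # χ, χ, ψ ⇒ ϕ
    print(normalize(s1) == normalize(s2))   # True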


Definition 7.11 (Consistency). A set of sentences Γ is inconsistent iff there is a
finite subset Γ0 ⊆ Γ such that LK derives Γ0 ⇒ . If Γ is not inconsistent, i.e.,
if for every finite Γ0 ⊆ Γ, LK does not derive Γ0 ⇒ , we say it is consistent.

Proposition 7.12 (Reflexivity). If ϕ ∈ Γ, then Γ ⊢ ϕ.

Proof. The initial sequent ϕ ⇒ ϕ is derivable, and { ϕ} ⊆ Γ.

Proposition 7.13 (Monotony). If Γ ⊆ ∆ and Γ ⊢ ϕ, then ∆ ⊢ ϕ.

Proof. Suppose Γ ⊢ ϕ, i.e., there is a finite Γ0 ⊆ Γ such that Γ0 ⇒ ϕ is
derivable. Since Γ ⊆ ∆, Γ0 is also a finite subset of ∆. The derivation of Γ0 ⇒ ϕ
thus also shows ∆ ⊢ ϕ.

Proposition 7.14 (Transitivity). If Γ ⊢ ϕ and { ϕ} ∪ ∆ ⊢ ψ, then Γ ∪ ∆ ⊢ ψ.

Proof. If Γ ⊢ ϕ, there is a finite Γ0 ⊆ Γ and a derivation π0 of Γ0 ⇒ ϕ. If
{ ϕ} ∪ ∆ ⊢ ψ, then for some finite subset ∆0 ⊆ ∆, there is a derivation π1 of
ϕ, ∆0 ⇒ ψ. Consider the following derivation:

    π0              π1

 Γ0 ⇒ ϕ         ϕ, ∆0 ⇒ ψ
--------------------------- Cut
        Γ0, ∆0 ⇒ ψ

Since Γ0 ∪ ∆0 ⊆ Γ ∪ ∆, this shows Γ ∪ ∆ ⊢ ψ.

Note that this means that in particular if Γ ⊢ ϕ and ϕ ⊢ ψ, then Γ ⊢ ψ. It
follows also that if ϕ1, . . . , ϕn ⊢ ψ and Γ ⊢ ϕi for each i, then Γ ⊢ ψ.

Proposition 7.15. Γ is inconsistent iff Γ ⊢ ϕ for every sentence ϕ.

Proof. Exercise.

Proposition 7.16 (Compactness). 1. If Γ ⊢ ϕ then there is a finite subset Γ0 ⊆
Γ such that Γ0 ⊢ ϕ.

2. If every finite subset of Γ is consistent, then Γ is consistent.

Proof. 1. If Γ ⊢ ϕ, then there is a finite subset Γ0 ⊆ Γ such that the sequent
Γ0 ⇒ ϕ has a derivation. Consequently, Γ0 ⊢ ϕ.

2. If Γ is inconsistent, there is a finite subset Γ0 ⊆ Γ such that LK derives
Γ0 ⇒ . But then Γ0 is a finite subset of Γ that is inconsistent.


7.7 Derivability and Consistency


We will now establish a number of properties of the derivability relation. They
are independently interesting, but each will play a role in the proof of the
completeness theorem.

Proposition 7.17. If Γ ⊢ ϕ and Γ ∪ { ϕ} is inconsistent, then Γ is inconsistent.

Proof. There are finite Γ0 ⊆ Γ and Γ1 ⊆ Γ such that LK derives Γ0 ⇒ ϕ and
ϕ, Γ1 ⇒ . Let the LK-derivation of Γ0 ⇒ ϕ be π0 and the LK-derivation of
ϕ, Γ1 ⇒ be π1. We can then derive

    π0              π1

 Γ0 ⇒ ϕ         ϕ, Γ1 ⇒
-------------------------- Cut
        Γ0, Γ1 ⇒

Since Γ0 ⊆ Γ and Γ1 ⊆ Γ, Γ0 ∪ Γ1 ⊆ Γ, hence Γ is inconsistent.

Proposition 7.18. Γ ⊢ ϕ iff Γ ∪ {¬ϕ} is inconsistent.

Proof. First suppose Γ ⊢ ϕ, i.e., there is a derivation π0 of Γ ⇒ ϕ. By adding
a ¬L rule, we obtain a derivation of ¬ϕ, Γ ⇒ , i.e., Γ ∪ {¬ϕ} is inconsistent.
If Γ ∪ {¬ϕ} is inconsistent, there is a derivation π1 of ¬ϕ, Γ ⇒ . The
following is a derivation of Γ ⇒ ϕ:

  ϕ ⇒ ϕ
----------- ¬R          π1
 ⇒ ϕ, ¬ϕ            ¬ϕ, Γ ⇒
------------------------------ Cut
            Γ ⇒ ϕ

Proposition 7.19. If Γ ⊢ ϕ and ¬ϕ ∈ Γ, then Γ is inconsistent.

Proof. Suppose Γ ⊢ ϕ and ¬ϕ ∈ Γ. Then there is a derivation π of a sequent
Γ0 ⇒ ϕ. The sequent ¬ϕ, Γ0 ⇒ is also derivable:

                      ϕ ⇒ ϕ
                     --------- ¬L
     π                ¬ϕ, ϕ ⇒
                     --------- XL
 Γ0 ⇒ ϕ               ϕ, ¬ϕ ⇒
------------------------------- Cut
          Γ0, ¬ϕ ⇒

Since ¬ϕ ∈ Γ and Γ0 ⊆ Γ, this shows that Γ is inconsistent.

Proposition 7.20. If Γ ∪ { ϕ} and Γ ∪ {¬ϕ} are both inconsistent, then Γ is
inconsistent.


Proof. There are finite sets Γ0 ⊆ Γ and Γ1 ⊆ Γ and LK-derivations π0 and π1
of ϕ, Γ0 ⇒ and ¬ϕ, Γ1 ⇒ , respectively. We can then derive

    π0
                         π1
 ϕ, Γ0 ⇒
----------- ¬R
 Γ0 ⇒ ¬ϕ             ¬ϕ, Γ1 ⇒
-------------------------------- Cut
           Γ0, Γ1 ⇒

Since Γ0 ⊆ Γ and Γ1 ⊆ Γ, Γ0 ∪ Γ1 ⊆ Γ. Hence Γ is inconsistent.

7.8 Derivability and the Propositional Connectives


Proposition 7.21. 1. Both ϕ ∧ ψ ⊢ ϕ and ϕ ∧ ψ ⊢ ψ.

2. ϕ, ψ ⊢ ϕ ∧ ψ.

Proof. 1. Both sequents ϕ ∧ ψ ⇒ ϕ and ϕ ∧ ψ ⇒ ψ are derivable:

  ϕ ⇒ ϕ                  ψ ⇒ ψ
------------ ∧L        ------------ ∧L
 ϕ ∧ ψ ⇒ ϕ              ϕ ∧ ψ ⇒ ψ

2. Here is a derivation of the sequent ϕ, ψ ⇒ ϕ ∧ ψ:

  ϕ ⇒ ϕ          ψ ⇒ ψ
==========      ==========
 ϕ, ψ ⇒ ϕ        ϕ, ψ ⇒ ψ
--------------------------- ∧R
      ϕ, ψ ⇒ ϕ ∧ ψ

Proposition 7.22. 1. ϕ ∨ ψ, ¬ϕ, ¬ψ is inconsistent.

2. Both ϕ ⊢ ϕ ∨ ψ and ψ ⊢ ϕ ∨ ψ.

Proof. 1. We give a derivation of the sequent ϕ ∨ ψ, ¬ϕ, ¬ψ ⇒:

  ϕ ⇒ ϕ               ψ ⇒ ψ
---------- ¬L       ---------- ¬L
 ¬ϕ, ϕ ⇒             ¬ψ, ψ ⇒
============         ============
 ϕ, ¬ϕ, ¬ψ ⇒         ψ, ¬ϕ, ¬ψ ⇒
---------------------------------- ∨L
       ϕ ∨ ψ, ¬ϕ, ¬ψ ⇒

(Recall that double inference lines indicate several weakening, contraction,
and exchange inferences.)

2. Both sequents ϕ ⇒ ϕ ∨ ψ and ψ ⇒ ϕ ∨ ψ have derivations:


  ϕ ⇒ ϕ                  ψ ⇒ ψ
------------- ∨R       ------------- ∨R
 ϕ ⇒ ϕ ∨ ψ              ψ ⇒ ϕ ∨ ψ

Proposition 7.23. 1. ϕ, ϕ → ψ ⊢ ψ.

2. Both ¬ϕ ⊢ ϕ → ψ and ψ ⊢ ϕ → ψ.

Proof. 1. The sequent ϕ → ψ, ϕ ⇒ ψ is derivable:

  ϕ ⇒ ϕ     ψ ⇒ ψ
-------------------- →L
   ϕ → ψ, ϕ ⇒ ψ

2. Both sequents ¬ϕ ⇒ ϕ → ψ and ψ ⇒ ϕ → ψ are derivable:

  ϕ ⇒ ϕ
---------- ¬L
 ¬ϕ, ϕ ⇒
---------- XL
 ϕ, ¬ϕ ⇒                   ψ ⇒ ψ
----------- WR           ----------- WL
 ϕ, ¬ϕ ⇒ ψ                ϕ, ψ ⇒ ψ
------------- →R         ------------ →R
 ¬ϕ ⇒ ϕ → ψ               ψ ⇒ ϕ → ψ

7.9 Soundness
A derivation system, such as the sequent calculus, is sound if it cannot derive
things that do not actually hold. Soundness is thus a kind of guaranteed
safety property for derivation systems. Depending on which proof-theoretic
property is in question, we would like to know, for instance, that

1. every derivable ϕ is a tautology;

2. if a sentence is derivable from some others, it is also a consequence of
them;

3. if a set of sentences is inconsistent, it is unsatisfiable.

These are important properties of a derivation system. If any of them do not
hold, the derivation system is deficient—it would derive too much. Consequently,
establishing the soundness of a derivation system is of the utmost
importance.
Because all these proof-theoretic properties are defined via the derivability
of certain sequents in the sequent calculus, proving (1)–(3) above requires
proving something about the semantic properties of derivable sequents. We will
first define what it means for a sequent to be valid, and then show that every
derivable sequent is valid. (1)–(3) then follow as corollaries from this result.


Definition 7.24. A valuation v satisfies a sequent Γ ⇒ ∆ iff either v ⊭ ϕ for
some ϕ ∈ Γ or v ⊨ ϕ for some ϕ ∈ ∆.
A sequent is valid iff every valuation v satisfies it.
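Definition 7.24 is easy to operationalize. The following Python sketch is ours (the tuple encoding of formulas is an assumption, not the text's notation); it also makes it easy to check that this notion of satisfaction agrees with the reading of sequents given at the start of the chapter.

    # Formulas: ("atom", s), ("not", f), ("and", f, g), ("or", f, g), ("imp", f, g)

    def value(f, v):
        op = f[0]
        if op == "atom": return v[f[1]]
        if op == "not":  return not value(f[1], v)
        if op == "and":  return value(f[1], v) and value(f[2], v)
        if op == "or":   return value(f[1], v) or value(f[2], v)
        if op == "imp":  return (not value(f[1], v)) or value(f[2], v)
        raise ValueError(op)

    def satisfies(v, gamma, delta):
        """v satisfies Γ ⇒ ∆ iff v ⊭ ϕ for some ϕ ∈ Γ or v ⊨ ϕ for some ϕ ∈ ∆."""
        return any(not value(f, v) for f in gamma) or any(value(f, v) for f in delta)

    p, q = ("atom", "p"), ("atom", "q")
    v = {"p": True, "q": False}
    print(satisfies(v, [p], [q]))    # False: v makes p true and q false
    print(satisfies(v, [p, q], []))  # True: v falsifies q on the left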

Theorem 7.25 (Soundness). If LK derives Θ ⇒ Ξ, then Θ ⇒ Ξ is valid.

Proof. Let π be a derivation of Θ ⇒ Ξ. We proceed by induction on the number
of inferences n in π.
If the number of inferences is 0, then π consists only of an initial sequent.
Every initial sequent ϕ ⇒ ϕ is obviously valid, since for every v, either v ⊭ ϕ
or v ⊨ ϕ. (And every initial sequent ⊥ ⇒ is valid, since v ⊭ ⊥ for every v.)
If the number of inferences is greater than 0, we distinguish cases according
to the type of the lowermost inference. By induction hypothesis, we can
assume that the premises of that inference are valid, since the number of
inferences in the derivation of any premise is smaller than n.
First, we consider the possible inferences with only one premise.

1. The last inference is a weakening. Then Θ ⇒ Ξ is either ϕ, Γ ⇒ ∆ (if the
last inference is WL) or Γ ⇒ ∆, ϕ (if it's WR), and the derivation ends in
one of

  Γ ⇒ ∆                  Γ ⇒ ∆
----------- WL         ----------- WR
 ϕ, Γ ⇒ ∆               Γ ⇒ ∆, ϕ

By induction hypothesis, Γ ⇒ ∆ is valid, i.e., for every valuation v, either
there is some χ ∈ Γ such that v ⊭ χ or there is some χ ∈ ∆ such that
v ⊨ χ.
If v ⊭ χ for some χ ∈ Γ, then χ ∈ Θ as well, since Θ = ϕ, Γ, and so v ⊭ χ
for some χ ∈ Θ. Similarly, if v ⊨ χ for some χ ∈ ∆, then since χ ∈ Ξ, v ⊨ χ
for some χ ∈ Ξ. Consequently, Θ ⇒ Ξ is valid.

2. The last inference is ¬L: Then the premise of the last inference is Γ ⇒
∆, ϕ and the conclusion is ¬ϕ, Γ ⇒ ∆, i.e., the derivation ends in

 Γ ⇒ ∆, ϕ
----------- ¬L
 ¬ϕ, Γ ⇒ ∆

and Θ = ¬ϕ, Γ while Ξ = ∆.
The induction hypothesis tells us that Γ ⇒ ∆, ϕ is valid, i.e., for every v,
either (a) for some χ ∈ Γ, v ⊭ χ, or (b) for some χ ∈ ∆, v ⊨ χ, or (c) v ⊨ ϕ.


We want to show that Θ ⇒ Ξ is also valid. Let v be a valuation. If (a)
holds, then there is χ ∈ Γ so that v ⊭ χ, but χ ∈ Θ as well. If (b) holds,
there is χ ∈ ∆ such that v ⊨ χ, but χ ∈ Ξ as well. Finally, if v ⊨ ϕ, then
v ⊭ ¬ϕ. Since ¬ϕ ∈ Θ, there is χ ∈ Θ such that v ⊭ χ. Consequently,
Θ ⇒ Ξ is valid.

3. The last inference is ¬R: Exercise.

4. The last inference is ∧L: There are two variants: ϕ ∧ ψ may be inferred
on the left from ϕ or from ψ on the left side of the premise. In the first
case, π ends in

  ϕ, Γ ⇒ ∆
-------------- ∧L
 ϕ ∧ ψ, Γ ⇒ ∆

and Θ = ϕ ∧ ψ, Γ while Ξ = ∆. Consider a valuation v. Since by induction
hypothesis, ϕ, Γ ⇒ ∆ is valid, (a) v ⊭ ϕ, (b) v ⊭ χ for some χ ∈ Γ, or
(c) v ⊨ χ for some χ ∈ ∆. In case (a), v ⊭ ϕ ∧ ψ, so there is χ ∈ Θ (namely,
ϕ ∧ ψ) such that v ⊭ χ. In case (b), there is χ ∈ Γ such that v ⊭ χ, and
χ ∈ Θ as well. In case (c), there is χ ∈ ∆ such that v ⊨ χ, and χ ∈ Ξ
as well since Ξ = ∆. So in each case, v satisfies ϕ ∧ ψ, Γ ⇒ ∆. Since v
was arbitrary, Θ ⇒ Ξ is valid. The case where ϕ ∧ ψ is inferred from ψ is
handled the same, changing ϕ to ψ.

5. The last inference is ∨R: There are two variants: ϕ ∨ ψ may be inferred
on the right from ϕ or from ψ on the right side of the premise. In the first
case, π ends in

 Γ ⇒ ∆, ϕ
-------------- ∨R
 Γ ⇒ ∆, ϕ ∨ ψ

Now Θ = Γ and Ξ = ∆, ϕ ∨ ψ. Consider a valuation v. Since Γ ⇒ ∆, ϕ is
valid, (a) v ⊨ ϕ, (b) v ⊭ χ for some χ ∈ Γ, or (c) v ⊨ χ for some χ ∈ ∆. In
case (a), v ⊨ ϕ ∨ ψ. In case (b), there is χ ∈ Γ such that v ⊭ χ. In case (c),
there is χ ∈ ∆ such that v ⊨ χ. So in each case, v satisfies Γ ⇒ ∆, ϕ ∨ ψ,
i.e., Θ ⇒ Ξ. Since v was arbitrary, Θ ⇒ Ξ is valid. The case where ϕ ∨ ψ
is inferred from ψ is handled the same, changing ϕ to ψ.

6. The last inference is →R: Then π ends in


  ϕ, Γ ⇒ ∆, ψ
--------------- →R
 Γ ⇒ ∆, ϕ → ψ

Again, the induction hypothesis says that the premise is valid; we want
to show that the conclusion is valid as well. Let v be arbitrary. Since
ϕ, Γ ⇒ ∆, ψ is valid, at least one of the following cases obtains: (a) v ⊭ ϕ,
(b) v ⊨ ψ, (c) v ⊭ χ for some χ ∈ Γ, or (d) v ⊨ χ for some χ ∈ ∆. In cases
(a) and (b), v ⊨ ϕ → ψ and so there is a χ ∈ ∆, ϕ → ψ such that v ⊨ χ. In
case (c), for some χ ∈ Γ, v ⊭ χ. In case (d), for some χ ∈ ∆, v ⊨ χ. In
each case, v satisfies Γ ⇒ ∆, ϕ → ψ. Since v was arbitrary, Γ ⇒ ∆, ϕ → ψ
is valid.

Now let’s consider the possible inferences with two premises.

1. The last inference is a cut: Then π ends in

 Γ ⇒ ∆, ϕ     ϕ, Π ⇒ Λ
------------------------ Cut
      Γ, Π ⇒ ∆, Λ

Let v be a valuation. By induction hypothesis, the premises are valid, so
v satisfies both premises. We distinguish two cases: (a) v ⊭ ϕ and (b)
v ⊨ ϕ. In case (a), in order for v to satisfy the left premise, it must satisfy
Γ ⇒ ∆. But then it also satisfies the conclusion. In case (b), in order for
v to satisfy the right premise, it must satisfy Π ⇒ Λ. Again, v satisfies the
conclusion.

2. The last inference is ∧R. Then π ends in

 Γ ⇒ ∆, ϕ     Γ ⇒ ∆, ψ
------------------------ ∧R
      Γ ⇒ ∆, ϕ ∧ ψ

Consider a valuation v. If v satisfies Γ ⇒ ∆, we are done. So suppose
it doesn't. Since Γ ⇒ ∆, ϕ is valid by induction hypothesis, v ⊨ ϕ.
Similarly, since Γ ⇒ ∆, ψ is valid, v ⊨ ψ. But then v ⊨ ϕ ∧ ψ, so v satisfies
Γ ⇒ ∆, ϕ ∧ ψ.

3. The last inference is ∨L: Exercise.

4. The last inference is →L. Then π ends in


 Γ ⇒ ∆, ϕ     ψ, Π ⇒ Λ
------------------------ →L
  ϕ → ψ, Γ, Π ⇒ ∆, Λ

Again, consider a valuation v and suppose v doesn't satisfy Γ, Π ⇒
∆, Λ. We have to show that v ⊭ ϕ → ψ. If v doesn't satisfy Γ, Π ⇒ ∆, Λ,
it satisfies neither Γ ⇒ ∆ nor Π ⇒ Λ. Since Γ ⇒ ∆, ϕ is valid, we have
v ⊨ ϕ. Since ψ, Π ⇒ Λ is valid, we have v ⊭ ψ. But then v ⊭ ϕ → ψ,
which is what we wanted to show.

Corollary 7.26. If ⊢ ϕ then ϕ is a tautology.

Corollary 7.27. If Γ ⊢ ϕ then Γ ⊨ ϕ.

Proof. If Γ ⊢ ϕ then for some finite subset Γ0 ⊆ Γ, there is a derivation of
Γ0 ⇒ ϕ. By Theorem 7.25, every valuation v either makes some ψ ∈ Γ0 false or
makes ϕ true. Hence, if v ⊨ Γ then also v ⊨ ϕ.

Corollary 7.28. If Γ is satisfiable, then it is consistent.

Proof. We prove the contrapositive. Suppose that Γ is not consistent. Then
there is a finite Γ0 ⊆ Γ and a derivation of Γ0 ⇒ . By Theorem 7.25, Γ0 ⇒ is
valid. In other words, for every valuation v, there is χ ∈ Γ0 so that v ⊭ χ, and
since Γ0 ⊆ Γ, that χ is also in Γ. Thus, no v satisfies Γ, and Γ is not satisfiable.

Problems
Problem 7.1. Give derivations of the following sequents:

1. ⇒ ¬( ϕ → ψ) → ( ϕ ∧ ¬ψ)

2. ( ϕ ∧ ψ) → χ ⇒ ( ϕ → χ) ∨ (ψ → χ)

Problem 7.2. Prove Proposition 7.15.

Problem 7.3. Prove that Γ ⊢ ¬ϕ iff Γ ∪ { ϕ} is inconsistent.

Problem 7.4. Complete the proof of Theorem 7.25.



Chapter 8

Natural Deduction

This chapter presents a natural deduction system in the style of
Gentzen/Prawitz.
To include or exclude material relevant to natural deduction as a proof
system, use the “prfND” tag.

8.1 Rules and Derivations


Natural deduction systems are meant to closely parallel the informal reason-
ing used in mathematical proof (hence it is somewhat “natural”). Natural
deduction proofs begin with assumptions. Inference rules are then applied.
Assumptions are “discharged” by the ¬Intro, →Intro, and ∨Elim inference
rules, and the label of the discharged assumption is placed beside the infer-
ence for clarity.

Definition 8.1 (Initial Formula). An initial formula or assumption is any formula
in the topmost position of any branch.

Derivations in natural deduction are certain trees of sentences, where the
topmost sentences are assumptions, and if a sentence stands below one, two,
or three other sentences, it must follow correctly by a rule of inference. The
sentences at the top of the inference are called the premises, and the sentence
below, the conclusion of the inference. The rules come in pairs, an introduction and
an elimination rule for each logical operator. They introduce a logical operator
in the conclusion or remove a logical operator from a premise of the rule.
Some of the rules allow an assumption of a certain type to be discharged. To
indicate which assumption is discharged by which inference, we also assign
labels to both the assumption and the inference. This is indicated by writing
the assumption as “[ϕ]n”.


It is customary to consider rules for all logical operators, even for those (if
any) that we consider as defined.

8.2 Propositional Rules

Rules for ∧

  ϕ    ψ
---------- ∧Intro
  ϕ ∧ ψ

 ϕ ∧ ψ               ϕ ∧ ψ
-------- ∧Elim      -------- ∧Elim
   ϕ                   ψ

Rules for ∨

   ϕ                     ψ
-------- ∨Intro       -------- ∨Intro
 ϕ ∨ ψ                 ϕ ∨ ψ

           [ϕ]n     [ψ]n

 ϕ ∨ ψ      χ        χ
------------------------ n ∨Elim
           χ

Rules for →

 [ϕ]n

   ψ
-------- n →Intro
 ϕ → ψ

 ϕ → ψ    ϕ
-------------- →Elim
      ψ

Rules for ¬

 [ϕ]n

   ⊥
------ n ¬Intro
  ¬ϕ

 ¬ϕ    ϕ
---------- ¬Elim
    ⊥


Rules for ⊥

  ⊥
------ ⊥I
  ϕ

 [¬ϕ]n

   ⊥
------ n ⊥C
   ϕ

Note that ¬Intro and ⊥C are very similar: The difference is that ¬Intro derives
a negated sentence ¬ ϕ but ⊥C a positive sentence ϕ.

8.3 Derivations
We’ve said what an assumption is, and we’ve given the rules of inference.
Derivations in natural deduction are inductively generated from these: each
derivation either is an assumption on its own, or consists of one, two, or three
derivations followed by a correct inference.

Definition 8.2 (Derivation). A derivation of a sentence ϕ from assumptions Γ
is a tree of sentences satisfying the following conditions:

1. The topmost sentences of the tree are either in Γ or are discharged by an
inference in the tree.

2. The bottommost sentence of the tree is ϕ.

3. Every sentence in the tree except ϕ is a premise of a correct application of
an inference rule whose conclusion stands directly below that sentence
in the tree.

We then say that ϕ is the conclusion of the derivation and that ϕ is derivable
from Γ.
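For a machine-checkable picture of what Definition 8.2 requires, here is a Python sketch of our own. It covers only a discharge-free fragment (∧Intro, ∧Elim, →Elim); full natural deduction with discharged assumptions needs more bookkeeping, so this is an illustration under those assumptions, not the system itself.

    # A derivation is ("assume", ϕ) or (rule_tag, subderivation(s)).
    # Formulas are nested tuples, e.g., ("imp", ("atom", "p"), ("atom", "q")).

    def conclusion(d):
        """The sentence at the root (bottom) of a derivation."""
        if d[0] == "assume":
            return d[1]
        if d[0] == "andI":
            return ("and", conclusion(d[1]), conclusion(d[2]))
        if d[0] in ("andE1", "andE2"):
            c = conclusion(d[1])
            assert c[0] == "and", "∧Elim needs a conjunction as premise"
            return c[1] if d[0] == "andE1" else c[2]
        if d[0] == "impE":
            major, minor = conclusion(d[1]), conclusion(d[2])
            assert major[0] == "imp" and major[1] == minor, "premises don't match"
            return major[2]
        raise ValueError(d[0])

    def assumptions(d):
        """The (undischarged) assumptions of a derivation in this fragment."""
        if d[0] == "assume":
            return {d[1]}
        return set().union(*(assumptions(s) for s in d[1:]))

    p, q = ("atom", "p"), ("atom", "q")
    d = ("andE1", ("assume", ("and", p, q)))   # ϕ ∧ ψ above, ϕ below
    print(conclusion(d), assumptions(d))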

Example 8.3. Every assumption on its own is a derivation. So, e.g., χ by itself
is a derivation, and so is θ by itself. We can obtain a new derivation from these
by applying, say, the ∧Intro rule,

  ϕ    ψ
---------- ∧Intro
  ϕ ∧ ψ
These rules are meant to be general: we can replace the ϕ and ψ in them with
any sentences, e.g., by χ and θ. Then the conclusion would be χ ∧ θ, and so

  χ    θ
---------- ∧Intro
  χ ∧ θ


is a correct derivation. Of course, we can also switch the assumptions, so that
θ plays the role of ϕ and χ that of ψ. Thus,

  θ    χ
---------- ∧Intro
  θ ∧ χ

is also a correct derivation.


We can now apply another rule, say, →Intro, which allows us to conclude
a conditional and allows us to discharge any assumption that is identical to
the conclusion of that conditional. So both of the following would be correct
derivations:

 [χ]1    θ                       χ    [θ]1
----------- ∧Intro             ----------- ∧Intro
   χ ∧ θ                          χ ∧ θ
-------------- 1 →Intro        -------------- 1 →Intro
 χ → (χ ∧ θ)                    θ → (χ ∧ θ)

8.4 Examples of Derivations


Example 8.4. Let’s give a derivation of the sentence ( ϕ ∧ ψ) → ϕ.
We begin by writing the desired conclusion at the bottom of the derivation.

( ϕ ∧ ψ) → ϕ

Next, we need to figure out what kind of inference could result in a sen-
tence of this form. The main operator of the conclusion is →, so we’ll try to
arrive at the conclusion using the →Intro rule. It is best to write down the as-
sumptions involved and label the inference rules as you progress, so it is easy
to see whether all assumptions have been discharged at the end of the proof.

 [ϕ ∧ ψ]1

     ϕ
-------------- 1 →Intro
 (ϕ ∧ ψ) → ϕ

We now need to fill in the steps from the assumption ϕ ∧ ψ to ϕ. Since we
only have one connective to deal with, ∧, we must use the ∧Elim rule. This
gives us the following derivation:

 [ϕ ∧ ψ]1
---------- ∧Elim
     ϕ
-------------- 1 →Intro
 (ϕ ∧ ψ) → ϕ

We now have a correct derivation of ( ϕ ∧ ψ) → ϕ.


Example 8.5. Now let’s give a derivation of (¬ ϕ ∨ ψ) → ( ϕ → ψ).


We begin by writing the desired conclusion at the bottom of the derivation.

(¬ ϕ ∨ ψ) → ( ϕ → ψ)

To find a logical rule that could give us this conclusion, we look at the logical
connectives in the conclusion: ¬, ∨, and →. We only care at the moment about
the first occurrence of → because it is the main operator of the conclusion,
while ¬, ∨, and the second occurrence of → are inside the scope
of another connective, so we will take care of those later. We therefore start
with the →Intro rule. A correct application must look as follows:

 [¬ϕ ∨ ψ]1

       ϕ → ψ
---------------------- 1 →Intro
 (¬ϕ ∨ ψ) → (ϕ → ψ)

This leaves us with two possibilities to continue. Either we can keep work-
ing from the bottom up and look for another application of the →Intro rule, or
we can work from the top down and apply a ∨Elim rule. Let us apply the lat-
ter. We will use the assumption ¬ ϕ ∨ ψ as the leftmost premise of ∨Elim. For
a valid application of ∨Elim, the other two premises must be identical to the
conclusion ϕ → ψ, but each may be derived in turn from another assumption,
namely the two disjuncts of ¬ ϕ ∨ ψ. So our derivation will look like this:

               [¬ϕ]2        [ψ]2

 [¬ϕ ∨ ψ]1     ϕ → ψ        ϕ → ψ
------------------------------------ 2 ∨Elim
              ϕ → ψ
      ---------------------- 1 →Intro
       (¬ϕ ∨ ψ) → (ϕ → ψ)

In each of the two branches on the right, we want to derive ϕ → ψ, which
is best done using →Intro.

 [¬ϕ]2, [ϕ]3               [ψ]2, [ϕ]4

      ψ                         ψ
  ---------- 3 →Intro       ---------- 4 →Intro
 [¬ϕ ∨ ψ]1      ϕ → ψ          ϕ → ψ
-------------------------------------- 2 ∨Elim
               ϕ → ψ
       ---------------------- 1 →Intro
        (¬ϕ ∨ ψ) → (ϕ → ψ)


For the two missing parts of the derivation, we need derivations of ψ from
¬ϕ and ϕ in the middle, and from ϕ and ψ on the right. Let's take the former
first. ¬ϕ and ϕ are the two premises of ¬Elim:

 [¬ϕ]2    [ϕ]3
--------------- ¬Elim
       ⊥
By using ⊥ I , we can obtain ψ as a conclusion and complete the branch.

 [¬ϕ]2    [ϕ]3
--------------- ¬Elim
       ⊥
      ----- ⊥I              [ψ]2, [ϕ]4
       ψ                         ψ
  ---------- 3 →Intro        ---------- 4 →Intro
 [¬ϕ ∨ ψ]1      ϕ → ψ           ϕ → ψ
--------------------------------------- 2 ∨Elim
               ϕ → ψ
       ---------------------- 1 →Intro
        (¬ϕ ∨ ψ) → (ϕ → ψ)
Let's now look at the rightmost branch. Here it's important to realize that
the definition of derivation allows assumptions to be discharged but does not
require them to be. In other words, if we can derive ψ from one of the assumptions
ϕ and ψ without using the other, that's ok. And to derive ψ from ψ is
trivial: ψ by itself is such a derivation, and no inferences are needed. So we
can simply delete the assumption ϕ.

 [¬ϕ]2    [ϕ]3
--------------- ¬Elim
       ⊥
      ----- ⊥I
       ψ
  ---------- 3 →Intro            [ψ]2
 [¬ϕ ∨ ψ]1      ϕ → ψ          -------- →Intro
                                ϕ → ψ
--------------------------------------- 2 ∨Elim
               ϕ → ψ
       ---------------------- 1 →Intro
        (¬ϕ ∨ ψ) → (ϕ → ψ)
Note that in the finished derivation, the rightmost →Intro inference does not
actually discharge any assumptions.

Example 8.6. So far we have not needed the ⊥C rule. It is special in that it al-
lows us to discharge an assumption that isn’t a sub-formula of the conclusion
of the rule. It is closely related to the ⊥ I rule. In fact, the ⊥ I rule is a special
case of the ⊥C rule—there is a logic called “intuitionistic logic” in which only
⊥ I is allowed. The ⊥C rule is a last resort when nothing else works. For in-
stance, suppose we want to derive ϕ ∨ ¬ ϕ. Our usual strategy would be to
attempt to derive ϕ ∨ ¬ ϕ using ∨Intro. But this would require us to derive
either ϕ or ¬ ϕ from no assumptions, and this can’t be done. ⊥C to the rescue!


 [¬(ϕ ∨ ¬ϕ)]1

      ⊥
  ---------- 1 ⊥C
   ϕ ∨ ¬ϕ

Now we're looking for a derivation of ⊥ from ¬(ϕ ∨ ¬ϕ). Since ⊥ is the
conclusion of ¬Elim we might try that:

 [¬(ϕ ∨ ¬ϕ)]1    [¬(ϕ ∨ ¬ϕ)]1

      ¬ϕ               ϕ
--------------------------- ¬Elim
             ⊥
         ---------- 1 ⊥C
          ϕ ∨ ¬ϕ
Our strategy for finding a derivation of ¬ ϕ calls for an application of ¬Intro:

 [¬(ϕ ∨ ¬ϕ)]1, [ϕ]2

          ⊥
      ------- 2 ¬Intro    [¬(ϕ ∨ ¬ϕ)]1
         ¬ϕ                     ϕ
---------------------------------- ¬Elim
               ⊥
           ---------- 1 ⊥C
            ϕ ∨ ¬ϕ

Here, we can get ⊥ easily by applying ¬Elim to the assumption ¬( ϕ ∨ ¬ ϕ)


and ϕ ∨ ¬ ϕ which follows from our new assumption ϕ by ∨Intro:

               [ϕ]2
           ---------- ∨Intro
 [¬(ϕ ∨ ¬ϕ)]1    ϕ ∨ ¬ϕ
-------------------------- ¬Elim
          ⊥
      ------- 2 ¬Intro    [¬(ϕ ∨ ¬ϕ)]1
         ¬ϕ                     ϕ
---------------------------------- ¬Elim
               ⊥
           ---------- 1 ⊥C
            ϕ ∨ ¬ϕ
On the right side we use the same strategy, except we get ϕ by ⊥C :

               [ϕ]2                                [¬ϕ]3
           ---------- ∨Intro                   ---------- ∨Intro
 [¬(ϕ ∨ ¬ϕ)]1    ϕ ∨ ¬ϕ             [¬(ϕ ∨ ¬ϕ)]1    ϕ ∨ ¬ϕ
-------------------------- ¬Elim    -------------------------- ¬Elim
          ⊥                                   ⊥
      ------- 2 ¬Intro                    ------- 3 ⊥C
         ¬ϕ                                   ϕ
---------------------------------------------- ¬Elim
                      ⊥
                  ---------- 1 ⊥C
                   ϕ ∨ ¬ϕ


8.5 Proof-Theoretic Notions

This section collects the definitions of the provability relation and consistency
for natural deduction.

Just as we've defined a number of important semantic notions (validity, entailment,
satisfiability), we now define corresponding proof-theoretic notions. These
are not defined by appeal to satisfaction of sentences in structures, but by appeal
to the derivability or non-derivability of certain sentences from others. It
was an important discovery that these notions coincide. That they do is the
content of the soundness and completeness theorems.

Definition 8.7 (Theorems). A sentence ϕ is a theorem if there is a derivation
of ϕ in natural deduction in which all assumptions are discharged. We write
⊢ ϕ if ϕ is a theorem and ⊬ ϕ if it is not.

Definition 8.8 (Derivability). A sentence ϕ is derivable from a set of sentences Γ,
Γ ⊢ ϕ, if there is a derivation with conclusion ϕ and in which every assumption
is either discharged or is in Γ. If ϕ is not derivable from Γ we write Γ ⊬ ϕ.

Definition 8.9 (Consistency). A set of sentences Γ is inconsistent iff Γ ⊢ ⊥. If
Γ is not inconsistent, i.e., if Γ ⊬ ⊥, we say it is consistent.

Proposition 8.10 (Reflexivity). If ϕ ∈ Γ, then Γ ⊢ ϕ.

Proof. The assumption ϕ by itself is a derivation of ϕ where every undischarged
assumption (i.e., ϕ) is in Γ.

Proposition 8.11 (Monotony). If Γ ⊆ ∆ and Γ ⊢ ϕ, then ∆ ⊢ ϕ.

Proof. Any derivation of ϕ from Γ is also a derivation of ϕ from ∆.

Proposition 8.12 (Transitivity). If Γ ⊢ ϕ and { ϕ} ∪ ∆ ⊢ ψ, then Γ ∪ ∆ ⊢ ψ.

Proof. If Γ ⊢ ϕ, there is a derivation δ0 of ϕ with all undischarged assumptions
in Γ. If { ϕ} ∪ ∆ ⊢ ψ, then there is a derivation δ1 of ψ with all undischarged
assumptions in { ϕ} ∪ ∆. Now consider:

 ∆, [ϕ]1
                         Γ
    δ1
                         δ0
    ψ
--------- 1 →Intro
  ϕ → ψ                  ϕ
---------------------------- →Elim
             ψ


The undischarged assumptions are now all among Γ ∪ ∆, so this shows
Γ ∪ ∆ ⊢ ψ.

Note that this means that in particular if Γ ⊢ ϕ and ϕ ⊢ ψ, then Γ ⊢ ψ. It
follows also that if ϕ1, . . . , ϕn ⊢ ψ and Γ ⊢ ϕi for each i, then Γ ⊢ ψ.

Proposition 8.13. Γ is inconsistent iff Γ ⊢ ϕ for every sentence ϕ.

Proof. Exercise.

Proposition 8.14 (Compactness). 1. If Γ ⊢ ϕ then there is a finite subset Γ0 ⊆
Γ such that Γ0 ⊢ ϕ.

2. If every finite subset of Γ is consistent, then Γ is consistent.

Proof. 1. If Γ ⊢ ϕ, then there is a derivation δ of ϕ from Γ. Let Γ0 be the set
of undischarged assumptions of δ. Since any derivation is finite, Γ0 can
only contain finitely many sentences. So, δ is a derivation of ϕ from a
finite Γ0 ⊆ Γ.

2. This is the contrapositive of (1) for the special case ϕ ≡ ⊥.

8.6 Derivability and Consistency


We will now establish a number of properties of the derivability relation. They
are independently interesting, but each will play a role in the proof of the
completeness theorem.

Proposition 8.15. If Γ ⊢ ϕ and Γ ∪ { ϕ} is inconsistent, then Γ is inconsistent.

Proof. Let the derivation of ϕ from Γ be δ1 and the derivation of ⊥ from Γ ∪
{ ϕ} be δ2. We can then derive:

 Γ, [ϕ]1
                         Γ
    δ2
                         δ1
    ⊥
--------- 1 ¬Intro
   ¬ϕ                    ϕ
---------------------------- ¬Elim
             ⊥

In the new derivation, the assumption ϕ is discharged, so it is a derivation
from Γ.

Proposition 8.16. Γ ⊢ ϕ iff Γ ∪ {¬ϕ} is inconsistent.

Proof. First suppose Γ ⊢ ϕ, i.e., there is a derivation δ0 of ϕ from undischarged
assumptions Γ. We obtain a derivation of ⊥ from Γ ∪ {¬ϕ} as follows:


           Γ

           δ0

 ¬ϕ        ϕ
-------------- ¬Elim
       ⊥

Now assume Γ ∪ {¬ ϕ} is inconsistent, and let δ1 be the corresponding
derivation of ⊥ from undischarged assumptions in Γ ∪ {¬ ϕ}. We obtain
a derivation of ϕ from Γ alone by using ⊥C :

 Γ, [¬ϕ]1

    δ1

    ⊥
------- 1 ⊥C
    ϕ

Proposition 8.17. If Γ ⊢ ϕ and ¬ϕ ∈ Γ, then Γ is inconsistent.

Proof. Suppose Γ ⊢ ϕ and ¬ϕ ∈ Γ. Then there is a derivation δ of ϕ from Γ.
Consider this simple application of the ¬Elim rule:

           δ

 ¬ϕ        ϕ
-------------- ¬Elim
       ⊥

Since ¬ϕ ∈ Γ, all undischarged assumptions are in Γ, and this shows that
Γ ⊢ ⊥.

Proposition 8.18. If Γ ∪ { ϕ} and Γ ∪ {¬ϕ} are both inconsistent, then Γ is
inconsistent.

Proof. There are derivations δ1 and δ2 of ⊥ from Γ ∪ { ϕ} and of ⊥ from Γ ∪ {¬ϕ},
respectively. We can then derive

 Γ, [¬ϕ]2             Γ, [ϕ]1

    δ2                   δ1

    ⊥                    ⊥
-------- 2 ¬Intro     -------- 1 ¬Intro
   ¬¬ϕ                   ¬ϕ
-------------------------------- ¬Elim
              ⊥

Since the assumptions ϕ and ¬ ϕ are discharged, this is a derivation of ⊥
from Γ alone. Hence Γ is inconsistent.


8.7 Derivability and the Propositional Connectives


Proposition 8.19. 1. Both ϕ ∧ ψ ⊢ ϕ and ϕ ∧ ψ ⊢ ψ.

2. ϕ, ψ ⊢ ϕ ∧ ψ.

Proof. 1. We can derive both

 ϕ ∧ ψ               ϕ ∧ ψ
-------- ∧Elim      -------- ∧Elim
   ϕ                   ψ

2. We can derive:

  ϕ    ψ
---------- ∧Intro
  ϕ ∧ ψ

Proposition 8.20. 1. ϕ ∨ ψ, ¬ϕ, ¬ψ is inconsistent.

2. Both ϕ ⊢ ϕ ∨ ψ and ψ ⊢ ϕ ∨ ψ.

Proof. 1. Consider the following derivation:

          ¬ϕ    [ϕ]1            ¬ψ    [ψ]1
         ------------ ¬Elim    ------------ ¬Elim
 ϕ ∨ ψ        ⊥                     ⊥
--------------------------------------------- 1 ∨Elim
                      ⊥

This is a derivation of ⊥ from undischarged assumptions ϕ ∨ ψ, ¬ϕ, and
¬ψ.

2. We can derive both

   ϕ                     ψ
-------- ∨Intro       -------- ∨Intro
 ϕ ∨ ψ                 ϕ ∨ ψ

Proposition 8.21. 1. ϕ, ϕ → ψ ⊢ ψ.

2. Both ¬ϕ ⊢ ϕ → ψ and ψ ⊢ ϕ → ψ.

Proof. 1. We can derive:

 ϕ → ψ    ϕ
-------------- →Elim
      ψ

2. This is shown by the following two derivations:


 ¬ϕ    [ϕ]1
------------- ¬Elim
      ⊥
    ----- ⊥I                  ψ
      ψ                   --------- →Intro
----------- 1 →Intro       ϕ → ψ
  ϕ → ψ

Note that →Intro may, but does not have to, discharge the assumption ϕ.

8.8 Soundness
A derivation system, such as natural deduction, is sound if it cannot derive
things that do not actually follow. Soundness is thus a kind of guaranteed
safety property for derivation systems. Depending on which proof-theoretic
property is in question, we would like to know, for instance, that

1. every derivable sentence is a tautology;

2. if a sentence is derivable from some others, it is also a consequence of
them;

3. if a set of sentences is inconsistent, it is unsatisfiable.

These are important properties of a derivation system. If any of them do not
hold, the derivation system is deficient—it would derive too much. Consequently,
establishing the soundness of a derivation system is of the utmost
importance.

Theorem 8.22 (Soundness). If ϕ is derivable from the undischarged assumptions
Γ, then Γ ⊨ ϕ.

Proof. Let δ be a derivation of ϕ. We proceed by induction on the number of
inferences in δ.
For the induction basis we show the claim if the number of inferences is 0.
In this case, δ consists only of an initial formula. Every initial formula ϕ is
an undischarged assumption, and as such, any valuation v that satisfies all of
the undischarged assumptions of the proof also satisfies ϕ.
Now for the inductive step. Suppose that δ contains n inferences. The
premise(s) of the lowermost inference are derived using sub-derivations, each
of which contains fewer than n inferences. We assume the induction hypothe-
sis: The premises of the last inference follow from the undischarged assump-
tions of the sub-derivations ending in those premises. We have to show that
ϕ follows from the undischarged assumptions of the entire proof.
We distinguish cases according to the type of the lowermost inference.
First, we consider the possible inferences with only one premise.

1. Suppose that the last inference is ¬Intro: The derivation has the form


 Γ, [ϕ]n

    δ1

    ⊥
------- n ¬Intro
   ¬ϕ

By inductive hypothesis, ⊥ follows from the undischarged assumptions
Γ ∪ { ϕ} of δ1. Consider a valuation v. We need to show that, if v ⊨ Γ,
then v ⊨ ¬ϕ. Suppose for reductio that v ⊨ Γ, but v ⊭ ¬ϕ, i.e., v ⊨ ϕ.
This would mean that v ⊨ Γ ∪ { ϕ}. This is contrary to our inductive
hypothesis. So, v ⊨ ¬ϕ.

2. The last inference is ∧Elim: There are two variants: ϕ or ψ may be inferred
from the premise ϕ ∧ ψ. Consider the first case. The derivation δ
looks like this:

    Γ

    δ1

 ϕ ∧ ψ
-------- ∧Elim
    ϕ

By inductive hypothesis, ϕ ∧ ψ follows from the undischarged assumptions
Γ of δ1. Consider a valuation v. We need to show that, if v ⊨ Γ,
then v ⊨ ϕ. Suppose v ⊨ Γ. By our inductive hypothesis (Γ ⊨ ϕ ∧ ψ), we
know that v ⊨ ϕ ∧ ψ. By definition, v ⊨ ϕ ∧ ψ iff v ⊨ ϕ and v ⊨ ψ, so
v ⊨ ϕ. (The case where ψ is inferred from ϕ ∧ ψ is handled similarly.)

3. The last inference is ∨Intro: There are two variants: ϕ ∨ ψ may be inferred
from the premise ϕ or the premise ψ. Consider the first case. The
derivation has the form

   Γ

   δ1

   ϕ
-------- ∨Intro
 ϕ ∨ ψ

By inductive hypothesis, ϕ follows from the undischarged assumptions Γ
of δ1. Consider a valuation v. We need to show that, if v ⊨ Γ, then
v ⊨ ϕ ∨ ψ. Suppose v ⊨ Γ; then v ⊨ ϕ since Γ ⊨ ϕ (the inductive hypothesis).
So it must also be the case that v ⊨ ϕ ∨ ψ. (The case where ϕ ∨ ψ is
inferred from ψ is handled similarly.)

4. The last inference is →Intro: ϕ → ψ is inferred from a subproof with
assumption ϕ and conclusion ψ, i.e.,


 Γ, [ϕ]n

    δ1

    ψ
-------- n →Intro
 ϕ → ψ

By inductive hypothesis, ψ follows from the undischarged assumptions
of δ1, i.e., Γ ∪ { ϕ} ⊨ ψ. The undischarged assumptions of δ are just Γ,
since ϕ is discharged at the last inference. So we need to show that
Γ ⊨ ϕ → ψ. For reductio, suppose that for some valuation v, v ⊨ Γ but
v ⊭ ϕ → ψ. So, v ⊨ ϕ and v ⊭ ψ. But by hypothesis, ψ is a consequence
of Γ ∪ { ϕ}, i.e., v ⊨ ψ, which is a contradiction. So, Γ ⊨ ϕ → ψ.
5. The last inference is ⊥I: Here, δ ends in

   Γ

   δ1

   ⊥
------ ⊥I
   ϕ

By induction hypothesis, Γ ⊨ ⊥. We have to show that Γ ⊨ ϕ. Suppose
not; then for some v we have v ⊨ Γ and v ⊭ ϕ. But we always have v ⊭ ⊥,
so this would mean that Γ ⊭ ⊥, contrary to the induction hypothesis.
6. The last inference is ⊥C : Exercise.

Now let's consider the possible inferences with several premises: ∧Intro,
∨Elim, →Elim, and ¬Elim.
1. The last inference is ∧Intro. ϕ ∧ ψ is inferred from the premises ϕ and ψ
and δ has the form

  Γ1          Γ2

  δ1          δ2

  ϕ            ψ
----------------- ∧Intro
      ϕ ∧ ψ

By induction hypothesis, ϕ follows from the undischarged assumptions Γ1
of δ1 and ψ follows from the undischarged assumptions Γ2 of δ2. The
undischarged assumptions of δ are Γ1 ∪ Γ2, so we have to show that
Γ1 ∪ Γ2 ⊨ ϕ ∧ ψ. Consider a valuation v with v ⊨ Γ1 ∪ Γ2. Since v ⊨ Γ1,
it must be the case that v ⊨ ϕ as Γ1 ⊨ ϕ, and since v ⊨ Γ2, v ⊨ ψ since
Γ2 ⊨ ψ. Together, v ⊨ ϕ ∧ ψ.


2. The last inference is ∨Elim: Exercise.

3. The last inference is →Elim. ψ is inferred from the premises ϕ → ψ
and ϕ. The derivation δ looks like this:

   Γ1          Γ2

   δ1          δ2

 ϕ → ψ         ϕ
------------------ →Elim
        ψ

By induction hypothesis, ϕ → ψ follows from the undischarged assumptions
Γ1 of δ1 and ϕ follows from the undischarged assumptions Γ2 of δ2.
Consider a valuation v. We need to show that, if v ⊨ Γ1 ∪ Γ2, then v ⊨ ψ.
Suppose v ⊨ Γ1 ∪ Γ2. Since Γ1 ⊨ ϕ → ψ, v ⊨ ϕ → ψ. Since Γ2 ⊨ ϕ, we
have v ⊨ ϕ. This means that v ⊨ ψ (for if v ⊭ ψ, since v ⊨ ϕ, we'd have
v ⊭ ϕ → ψ, contradicting v ⊨ ϕ → ψ).

4. The last inference is ¬Elim: Exercise.

Corollary 8.23. If ⊢ ϕ, then ϕ is a tautology.

Corollary 8.24. If Γ is satisfiable, then it is consistent.

Proof. We prove the contrapositive. Suppose that Γ is not consistent. Then
Γ ⊢ ⊥, i.e., there is a derivation of ⊥ from undischarged assumptions in Γ.
By Theorem 8.22, any valuation v that satisfies Γ must satisfy ⊥. Since v ⊭ ⊥
for every valuation v, no v can satisfy Γ, i.e., Γ is not satisfiable.

Problems
Problem 8.1. Give derivations of the following:

1. ¬( ϕ → ψ) → ( ϕ ∧ ¬ψ)

2. ( ϕ → χ) ∨ (ψ → χ) from the assumption ( ϕ ∧ ψ) → χ

Problem 8.2. Prove Proposition 8.13.

Problem 8.3. Prove that Γ ⊢ ¬ϕ iff Γ ∪ { ϕ} is inconsistent.

Problem 8.4. Complete the proof of Theorem 8.22.



Chapter 9

Tableaux

This chapter presents a signed analytic tableaux system.
To include or exclude material relevant to tableaux as a proof
system, use the “prfTab” tag.

9.1 Rules and Tableaux


A tableau is a systematic survey of the possible ways a sentence can be true
or false in a structure. The building blocks of a tableau are signed formulas:
sentences plus a truth value “sign,” either T or F. These signed formulas are
arranged in a (downward growing) tree.

Definition 9.1. A signed formula is a pair consisting of a truth value and a sen-
tence, i.e., either:
T ϕ or F ϕ.

Intuitively, we might read T ϕ as “ϕ might be true” and F ϕ as “ϕ might be
false” (in some structure).
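The intuitive reading can be made precise with a tiny Python sketch of our own (the tuple encoding of formulas and the function names are assumptions, not the text's notation): a valuation agrees with T ϕ when ϕ comes out true under it, and with F ϕ when ϕ comes out false.

    # Signed formulas as pairs (sign, formula), with sign in {"T", "F"}.

    def value(f, v):
        op = f[0]
        if op == "atom": return v[f[1]]
        if op == "not":  return not value(f[1], v)
        if op == "and":  return value(f[1], v) and value(f[2], v)
        if op == "or":   return value(f[1], v) or value(f[2], v)
        if op == "imp":  return (not value(f[1], v)) or value(f[2], v)
        raise ValueError(op)

    def agrees(v, signed):
        sign, f = signed
        return value(f, v) if sign == "T" else not value(f, v)

    p = ("atom", "p")
    print(agrees({"p": False}, ("F", p)))   # True: p might be false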
Each signed formula in the tree is either an assumption (which are listed at
the very top of the tree), or it is obtained from a signed formula above it by
one of a number of rules of inference. There are two rules for each possible
main operator of the preceding formula, one for the case when the sign is T,
and one for the case where the sign is F. Some rules allow the tree to branch,
and some only add signed formulas to the branch. A rule may be (and often
must be) applied not to the immediately preceding signed formula, but to any
signed formula in the branch from the root to the place the rule is applied.
A branch is closed when it contains both T ϕ and F ϕ. A closed tableau
is one where every branch is closed. Under the intuitive interpretation, any
branch describes a joint possibility, but T ϕ and F ϕ are not jointly possible. In
other words, if a branch is closed, the possibility it describes has been ruled


out. In particular, that means that a closed tableau rules out all possibilities
of simultaneously making every assumption of the form T ϕ true and every
assumption of the form F ϕ false.
A closed tableau for ϕ is a closed tableau with root F ϕ. If such a closed
tableau exists, all possibilities for ϕ being false have been ruled out; i.e., ϕ
must be true in every structure.

9.2 Propositional Rules

Rules for ¬

 T ¬ϕ             F ¬ϕ
------ ¬T        ------ ¬F
 F ϕ              T ϕ

Rules for ∧

 T ϕ ∧ ψ              F ϕ ∧ ψ
--------- ∧T        ------------ ∧F
   T ϕ                F ϕ | F ψ
   T ψ

Rules for ∨

  T ϕ ∨ ψ             F ϕ ∨ ψ
------------ ∨T      --------- ∨F
 T ϕ | T ψ              F ϕ
                        F ψ

Rules for →

  T ϕ → ψ             F ϕ → ψ
------------ →T      --------- →F
 F ϕ | T ψ              T ϕ
                        F ψ


The Cut Rule

------------ Cut
 T ϕ | F ϕ

The Cut rule is not applied “to” a previous signed formula; rather, it allows
every branch in a tableau to be split in two, one branch containing T ϕ, the
other F ϕ. It is not necessary—any set of signed formulas with a closed tableau
has one not using Cut—but it allows us to combine tableaux in a convenient
way.
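Since every signed formula triggers at most one rule, tableau construction can be automated. The following Python sketch is our own implementation of the rules above (without the Cut rule, which, as just noted, is dispensable); the tuple encoding and function names are assumptions of ours. A branch is a list of signed formulas, and we test whether every way of completing it closes.

    def flip(sign):
        return "F" if sign == "T" else "T"

    def closed(branch):
        """A branch is closed iff it contains both T ϕ and F ϕ."""
        return any((flip(s), f) in branch for (s, f) in branch)

    def tableau_closes(branch):
        """True iff every completion of this branch closes."""
        if closed(branch):
            return True
        for i, (sign, f) in enumerate(branch):
            if f[0] == "atom":
                continue  # atomic signed formulas cannot be expanded
            rest = branch[:i] + branch[i+1:]
            op = f[0]
            if op == "not":                    # ¬T and ¬F
                return tableau_closes(rest + [(flip(sign), f[1])])
            if op == "and":
                if sign == "T":                # ∧T: both conjuncts, same branch
                    return tableau_closes(rest + [("T", f[1]), ("T", f[2])])
                return (tableau_closes(rest + [("F", f[1])]) and   # ∧F: split
                        tableau_closes(rest + [("F", f[2])]))
            if op == "or":
                if sign == "F":                # ∨F: both disjuncts, same branch
                    return tableau_closes(rest + [("F", f[1]), ("F", f[2])])
                return (tableau_closes(rest + [("T", f[1])]) and   # ∨T: split
                        tableau_closes(rest + [("T", f[2])]))
            if op == "imp":
                if sign == "F":                # →F: T antecedent, F consequent
                    return tableau_closes(rest + [("T", f[1]), ("F", f[2])])
                return (tableau_closes(rest + [("F", f[1])]) and   # →T: split
                        tableau_closes(rest + [("T", f[2])]))
        return False  # only atoms left and no clash: an open, finished branch

    def is_theorem(f):
        """ϕ is a theorem iff there is a closed tableau for F ϕ."""
        return tableau_closes([("F", f)])

    p, q = ("atom", "p"), ("atom", "q")
    print(is_theorem(("imp", ("and", p, q), p)))   # (ϕ ∧ ψ) → ϕ : True
    print(is_theorem(("or", p, q)))                # ϕ ∨ ψ : False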

9.3 Tableaux
We’ve said what an assumption is, and we’ve given the rules of inference.
Tableaux are inductively generated from these: each tableau either is a single
branch consisting of one or more assumptions, or it results from a tableau by
applying one of the rules of inference on a branch.

Definition 9.2 (Tableau). A tableau for assumptions S1 ϕ1, . . . , Sn ϕn (where
each Si is either T or F) is a tree of signed formulas satisfying the following
conditions:

1. The n topmost signed formulas of the tree are Si ϕi , one below the other.

2. Every signed formula in the tree that is not one of the assumptions re-
sults from a correct application of an inference rule to a signed formula
in the branch above it.

A branch of a tableau is closed iff it contains both T ϕ and F ϕ, and open other-
wise. A tableau in which every branch is closed is a closed tableau (for its set
of assumptions). If a tableau is not closed, i.e., if it contains at least one open
branch, it is open.

Example 9.3. Every set of assumptions on its own is a tableau, but it will gen-
erally not be closed. (Obviously, it is closed only if the assumptions already
contain a pair of signed formulas T ϕ and F ϕ.)
From a tableau (open or closed) we can obtain a new, larger one by ap-
plying one of the rules of inference to a signed formula ϕ in it. The rule will
append one or more signed formulas to the end of any branch containing the
occurrence of ϕ to which we apply the rule.
For instance, consider the assumption T ϕ ∧ ¬ ϕ. Here is the (open) tableau
consisting of just that assumption:

1. T ϕ ∧ ¬ϕ Assumption


We obtain a new tableau from it by applying the ∧T rule to the assumption.
That rule allows us to add two new lines to the tableau, T ϕ and T ¬ϕ:

1. T ϕ ∧ ¬ϕ Assumption
2. Tϕ ∧T 1
3. T¬ ϕ ∧T 1

When we write down tableaux, we record the rules we’ve applied on the right
(e.g., ∧T1 means that the signed formula on that line is the result of applying
the ∧T rule to the signed formula on line 1). This new tableau now contains
additional signed formulas, but to only one (T ¬ ϕ) can we apply a rule (in this
case, the ¬T rule). This results in the closed tableau

1. T ϕ ∧ ¬ϕ Assumption
2. Tϕ ∧T 1
3. T¬ ϕ ∧T 1
4. Fϕ ¬T 3

9.4 Examples of Tableaux


Example 9.4. Let’s find a closed tableau for the sentence ( ϕ ∧ ψ) → ϕ.
We begin by writing the corresponding assumption at the top of the tableau.

1. F ( ϕ ∧ ψ) → ϕ Assumption

There is only one assumption, so only one signed formula to which we can
apply a rule. (For every signed formula, there is always at most one rule that
can be applied: it’s the rule for the corresponding sign and main operator of
the sentence.) In this case, this means we must apply →F.

1. F ( ϕ ∧ ψ) → ϕ X Assumption
2. Tϕ ∧ ψ →F 1
3. Fϕ →F 1

To keep track of which signed formulas we have applied their corresponding
rules to, we write a checkmark next to the sentence. However, only write a
checkmark if the rule has been applied to all open branches. Once a signed
formula has had the corresponding rule applied in every open branch, we will
not have to return to it and apply the rule again. In this case, there is only one
branch, so the rule only has to be applied once. (Note that checkmarks are
only a convenience for constructing tableaux and are not officially part of the
syntax of tableaux.)
There is one new signed formula to which we can apply a rule: the T ϕ ∧ ψ
on line 2. Applying the ∧T rule results in:


1. F ( ϕ ∧ ψ) → ϕ X Assumption
2. Tϕ ∧ ψ X →F 1
3. Fϕ →F 1
4. Tϕ ∧T 2
5. Tψ ∧T 2

Since the branch now contains both T ϕ (on line 4) and F ϕ (on line 3), the
branch is closed. Since it is the only branch, the tableau is closed. We have
found a closed tableau for ( ϕ ∧ ψ) → ϕ.

Example 9.5. Now let's find a closed tableau for (¬ϕ ∨ ψ) → (ϕ → ψ).
We begin with the corresponding assumption:

1. F (¬ ϕ ∨ ψ) → ( ϕ → ψ) Assumption

The one signed formula in this tableau has main operator → and sign F, so
we apply the →F rule to it to obtain:

1. F (¬ ϕ ∨ ψ) → ( ϕ → ψ) X Assumption
2. T¬ ϕ ∨ ψ →F 1
3. F ( ϕ → ψ) →F 1

We now have a choice as to whether to apply ∨T to line 2 or →F to line 3. It
actually doesn't matter which order we pick, as long as each signed formula
has its corresponding rule applied in every branch. So let's pick the first one.
The ∨T rule allows the tableau to branch, and the two conclusions of the rule
will be the new signed formulas added to the two new branches. This results
in:

1. F (¬ ϕ ∨ ψ) → ( ϕ → ψ) X Assumption
2. T¬ ϕ ∨ ψ X →F 1
3. F ( ϕ → ψ) →F 1

4. T ¬ϕ | T ψ ∨T 2

We have not applied the →F rule to line 3 yet: let’s do that now. To save
time, we apply it to both branches. Recall that we write a checkmark next
to a signed formula only if we have applied the corresponding rule in every
open branch. So it’s a good idea to apply a rule at the end of every branch that
contains the signed formula the rule applies to. That way we won’t have to
return to that signed formula lower down in the various branches.


1. F (¬ ϕ ∨ ψ) → ( ϕ → ψ) X Assumption
2. T¬ ϕ ∨ ψ X →F 1
3. F ( ϕ → ψ) X →F 1

4. T ¬ϕ | T ψ ∨T 2
5. T ϕ | T ϕ →F 3
6. F ψ | F ψ →F 3

The right branch is now closed. On the left branch, we can still apply the ¬T
rule to line 4. This results in F ϕ and closes the left branch:

1. F (¬ ϕ ∨ ψ) → ( ϕ → ψ) X Assumption
2. T¬ ϕ ∨ ψ X →F 1
3. F ( ϕ → ψ) X →F 1

4. T ¬ϕ | T ψ ∨T 2
5. T ϕ | T ϕ →F 3
6. F ψ | F ψ →F 3
7. F ϕ | ⊗ ¬T 4

Example 9.6. We can give tableaux for any number of signed formulas as
assumptions. Often it is also necessary to apply more than one rule that allows
branching; and in general a tableau can have any number of branches. For
instance, consider a tableau for {T ϕ ∨ (ψ ∧ χ), F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ)}. We start
by applying the ∨T to the first assumption:

1. T ϕ ∨ (ψ ∧ χ) X Assumption
2. F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ) Assumption

3. Tϕ Tψ ∧ χ ∨T 1

Now we can apply the ∧F rule to line 2. We do this on both branches simul-
taneously, and can therefore check off line 2:

1. T ϕ ∨ (ψ ∧ χ) X Assumption
2. F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ) X Assumption

3. Tϕ Tψ ∧ χ ∨T 1

4. Fϕ ∨ ψ Fϕ ∨ χ Fϕ ∨ ψ Fϕ ∨ χ ∧F 2


Now we can apply ∨F to all the branches containing ϕ ∨ ψ:

1. T ϕ ∨ (ψ ∧ χ) X Assumption
2. F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ) X Assumption

3. Tϕ Tψ ∧ χ ∨T 1

4. Fϕ ∨ ψ X Fϕ ∨ χ Fϕ ∨ ψ X Fϕ ∨ χ ∧F 2
5. Fϕ Fϕ ∨F 4
6. Fψ Fψ ∨F 4

The leftmost branch is now closed. Let’s now apply ∨F to ϕ ∨ χ:

1. T ϕ ∨ (ψ ∧ χ) X Assumption
2. F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ) X Assumption

3. Tϕ Tψ ∧ χ ∨T 1

4. Fϕ ∨ ψ X Fϕ ∨ χ X Fϕ ∨ ψ X Fϕ ∨ χ X ∧F 2
5. Fϕ Fϕ ∨F 4
6. Fψ Fψ ∨F 4
7. ⊗ Fϕ Fϕ ∨F 4
8. Fχ Fχ ∨F 4

Note that we moved the result of applying ∨F a second time below for clarity.
In this instance it would not have been needed, since the justifications would
have been the same.
Two branches remain open, and Tψ ∧ χ on line 3 remains unchecked. We
apply ∧T to it to obtain a closed tableau:


1. T ϕ ∨ (ψ ∧ χ) X Assumption
2. F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ) X Assumption

3. Tϕ Tψ ∧ χ X ∨T 1

4. Fϕ ∨ ψ X Fϕ ∨ χ X Fϕ ∨ ψ X Fϕ ∨ χ X ∧F 2
5. Fϕ Fϕ Fϕ Fϕ ∨F 4
6. Fψ Fχ Fψ Fχ ∨F 4
7. ⊗ ⊗ Tψ Tψ ∧T 3
8. Tχ Tχ ∧T 3
⊗ ⊗

For comparison, here’s a closed tableau for the same set of assumptions in
which the rules are applied in a different order:

1. T ϕ ∨ (ψ ∧ χ) X Assumption
2. F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ) X Assumption

3. Fϕ ∨ ψ X Fϕ ∨ χ X ∧F 2
4. Fϕ Fϕ ∨F 3
5. Fψ Fχ ∨F 3

6. Tϕ Tψ ∧ χ X Tϕ Tψ ∧ χ X ∨T 1
7. ⊗ Tψ ⊗ Tψ ∧T 3
8. Tχ Tχ ∧T 3
⊗ ⊗

9.5 Proof-Theoretic Notions

This section collects the definitions of the provability relation and con-
sistency for tableaux.

Just as we've defined a number of important semantic notions (validity, entailment,
satisfiability), we now define corresponding proof-theoretic notions. These
are not defined by appeal to satisfaction of sentences in structures, but by appeal
to the existence of certain closed tableaux. It was an important discovery
that these notions coincide. That they do is the content of the soundness and
completeness theorems.


Definition 9.7 (Theorems). A sentence ϕ is a theorem if there is a closed tableau
for F ϕ. We write ⊢ ϕ if ϕ is a theorem and ⊬ ϕ if it is not.

Definition 9.8 (Derivability). A sentence ϕ is derivable from a set of sentences Γ,
Γ ⊢ ϕ, iff there is a finite set {ψ1, . . . , ψn} ⊆ Γ and a closed tableau for the set

{F ϕ, T ψ1, . . . , T ψn}.

If ϕ is not derivable from Γ we write Γ ⊬ ϕ.

Definition 9.9 (Consistency). A set of sentences Γ is inconsistent iff there is a
finite set {ψ1, . . . , ψn} ⊆ Γ and a closed tableau for the set

{T ψ1, . . . , T ψn}.

If Γ is not inconsistent, we say it is consistent.

Proposition 9.10 (Reflexivity). If ϕ ∈ Γ, then Γ ⊢ ϕ.

Proof. If ϕ ∈ Γ, { ϕ} is a finite subset of Γ and the tableau

1. Fϕ Assumption
2. Tϕ Assumption

is closed.

Proposition 9.11 (Monotony). If Γ ⊆ ∆ and Γ ⊢ ϕ, then ∆ ⊢ ϕ.

Proof. Any finite subset of Γ is also a finite subset of ∆.

Proposition 9.12 (Transitivity). If Γ ⊢ ϕ and { ϕ} ∪ ∆ ⊢ ψ, then Γ ∪ ∆ ⊢ ψ.

Proof. If { ϕ} ∪ ∆ ⊢ ψ, then there is a finite subset ∆0 = {χ1, . . . , χn} ⊆ ∆ such
that

{F ψ, T ϕ, T χ1, . . . , T χn}

has a closed tableau. If Γ ⊢ ϕ then there are θ1, . . . , θm ∈ Γ such that

{F ϕ, T θ1, . . . , T θm}

has a closed tableau.
Now consider the tableau with assumptions

F ψ, T χ1, . . . , T χn, T θ1, . . . , T θm.


Apply the Cut rule on ϕ. This generates two branches, one with T ϕ in it, the
other with F ϕ. Thus, on the one branch, all of

{F ψ, T ϕ, T χ1, . . . , T χn}

are available. Since there is a closed tableau for these assumptions, we can
attach it to that branch; every branch through T ϕ closes. On the other branch,
all of

{F ϕ, T θ1, . . . , T θm}

are available, so we can also complete the other side to obtain a closed tableau.
This shows Γ ∪ ∆ ⊢ ψ.

Note that this means that in particular if Γ ⊢ ϕ and ϕ ⊢ ψ, then Γ ⊢ ψ. It
follows also that if ϕ1, . . . , ϕn ⊢ ψ and Γ ⊢ ϕi for each i, then Γ ⊢ ψ.

Proposition 9.13. Γ is inconsistent iff Γ ⊢ ϕ for every sentence ϕ.

Proof. Exercise.

Proposition 9.14 (Compactness). 1. If Γ ⊢ ϕ then there is a finite subset Γ0 ⊆
Γ such that Γ0 ⊢ ϕ.

2. If every finite subset of Γ is consistent, then Γ is consistent.

Proof. 1. If Γ ⊢ ϕ, then there is a finite subset Γ0 = {ψ1, . . . , ψn} ⊆ Γ and a
closed tableau for

F ϕ, T ψ1, . . . , T ψn.

This tableau also shows Γ0 ⊢ ϕ.

2. If Γ is inconsistent, then for some finite subset Γ0 = {ψ1, . . . , ψn} ⊆ Γ there is
a closed tableau for

T ψ1, . . . , T ψn.

This closed tableau shows that Γ0 is inconsistent.

9.6 Derivability and Consistency


We will now establish a number of properties of the derivability relation. They
are independently interesting, but each will play a role in the proof of the
completeness theorem.

Proposition 9.15. If Γ ⊢ ϕ and Γ ∪ { ϕ} is inconsistent, then Γ is inconsistent.


Proof. There are finite Γ0 = {ψ1, . . . , ψn} ⊆ Γ and Γ1 = {χ1, . . . , χm} ⊆ Γ such
that

{F ϕ, T ψ1, . . . , T ψn}
{T ϕ, T χ1, . . . , T χm}

have closed tableaux. Using the Cut rule on ϕ we can combine these into a
single closed tableau that shows Γ0 ∪ Γ1 is inconsistent. Since Γ0 ⊆ Γ and
Γ1 ⊆ Γ, Γ0 ∪ Γ1 ⊆ Γ, hence Γ is inconsistent.

Proposition 9.16. Γ ⊢ ϕ iff Γ ∪ {¬ϕ} is inconsistent.

Proof. First suppose Γ ⊢ ϕ, i.e., there is a closed tableau for

{F ϕ, T ψ1, . . . , T ψn}.

Using the ¬T rule, this can be turned into a closed tableau for

{T ¬ϕ, T ψ1, . . . , T ψn}.

On the other hand, if there is a closed tableau for the latter, we can turn it
into a closed tableau of the former by removing every formula that results
from ¬T applied to the first assumption T ¬ϕ as well as that assumption,
and adding the assumption F ϕ. For if a branch was closed before because
it contained the conclusion of ¬T applied to T ¬ϕ, i.e., F ϕ, the corresponding
branch in the new tableau is also closed. If a branch in the old tableau was
closed because it contained the assumption T ¬ϕ as well as F ¬ϕ, we can turn
it into a closed branch by applying ¬F to F ¬ϕ to obtain T ϕ. This closes the
branch since we added F ϕ as an assumption.

Proposition 9.17. If Γ ` ϕ and ¬ ϕ ∈ Γ, then Γ is inconsistent.

Proof. Suppose Γ ` ϕ and ¬ ϕ ∈ Γ. Then there are ψ1 , . . . , ψn ∈ Γ such that

{F ϕ, Tψ1 , . . . , Tψn }

has a closed tableau. Replace the assumption F ϕ by T ¬ ϕ, and insert the
conclusion F ϕ of ¬T applied to T ¬ ϕ after the assumptions. Any sentence in the
tableau justified by appeal to line 1 in the old tableau is now justified by appeal
to line n + 1. So if the old tableau was closed, the new one is. It shows that Γ
is inconsistent, since all assumptions are in Γ.

Proposition 9.18. If Γ ∪ { ϕ} and Γ ∪ {¬ ϕ} are both inconsistent, then Γ is incon-


sistent.

Proof. If there are ψ1 , . . . , ψn ∈ Γ and χ1 , . . . , χm ∈ Γ such that

{T ϕ,Tψ1 , . . . , Tψn }
{T ¬ ϕ,Tχ1 , . . . , Tχm }


both have closed tableaux, we can construct a tableau that shows that Γ is
inconsistent by using as assumptions Tψ1 , . . . , Tψn together with Tχ1 , . . . ,
Tχm , followed by an application of the Cut rule, yielding two branches, one
starting with T ϕ, the other with F ϕ. Add on the part below the assumptions
of the first tableau on the left side. Here, every rule application is still correct,
and every branch closes. On the right side, add the part below the assump-
tions of the second tableau, with the results of any applications of ¬T to T ¬ ϕ
removed.
For if a branch was closed before because it contained the conclusion of
¬T applied to T ¬ ϕ, i.e., F ϕ, as well as T ϕ, the corresponding branch in the
new tableau is also closed. If a branch in the old tableau was closed because
it contained the assumption T ¬ ϕ as well as F ¬ ϕ we can turn it into a closed
branch by applying ¬F to F ¬ ϕ to obtain T ϕ.

9.7 Derivability and the Propositional Connectives


Proposition 9.19. 1. Both ϕ ∧ ψ ` ϕ and ϕ ∧ ψ ` ψ.
2. ϕ, ψ ` ϕ ∧ ψ.

Proof. 1. Both {F ϕ, T ϕ ∧ ψ} and {F ψ, T ϕ ∧ ψ} have closed tableaux

1. Fϕ Assumption
2. Tϕ ∧ ψ Assumption
3. Tϕ ∧T 2
4. Tψ ∧T 2

1. Fψ Assumption
2. Tϕ ∧ ψ Assumption
3. Tϕ ∧T 2
4. Tψ ∧T 2

2. Here is a closed tableau for {T ϕ, Tψ, F ϕ ∧ ψ}:

1. Fϕ ∧ ψ Assumption
2. Tϕ Assumption
3. Tψ Assumption

4. Fϕ Fψ ∧F 1
⊗ ⊗


Proposition 9.20. 1. ϕ ∨ ψ, ¬ ϕ, ¬ψ is inconsistent.

2. Both ϕ ` ϕ ∨ ψ and ψ ` ϕ ∨ ψ.

Proof. 1. We give a closed tableau of {T ϕ ∨ ψ, T ¬ ϕ, T ¬ψ}:

1. Tϕ ∨ ψ Assumption
2. T¬ ϕ Assumption
3. T ¬ψ Assumption
4. Fϕ ¬T 2
5. Fψ ¬T 3

6. Tϕ Tψ ∨T 1
⊗ ⊗

2. Both {F ϕ ∨ ψ, T ϕ} and {F ϕ ∨ ψ, Tψ} have closed tableaux:

1. Fϕ ∨ ψ Assumption
2. Tϕ Assumption
3. Fϕ ∨F 1
4. Fψ ∨F 1

1. Fϕ ∨ ψ Assumption
2. Tψ Assumption
3. Fϕ ∨F 1
4. Fψ ∨F 1

Proposition 9.21. 1. ϕ, ϕ → ψ ` ψ.

2. Both ¬ ϕ ` ϕ → ψ and ψ ` ϕ → ψ.

Proof. 1. {F ψ, T ϕ → ψ, T ϕ} has a closed tableau:


1. Fψ Assumption
2. Tϕ → ψ Assumption
3. Tϕ Assumption

4. Fϕ Tψ →T 2
⊗ ⊗

2. Both {F ϕ → ψ, T ¬ ϕ} and {F ϕ → ψ, Tψ} have closed tableaux:

1. Fϕ → ψ Assumption
2. T¬ ϕ Assumption
3. Tϕ →F 1
4. Fψ →F 1
5. Fϕ ¬T 2

1. Fϕ → ψ Assumption
2. Tψ Assumption
3. Tϕ →F 1
4. Fψ →F 1

9.8 Soundness
A derivation system, such as tableaux, is sound if it cannot derive things that
do not actually hold. Soundness is thus a kind of guaranteed safety property
for derivation systems. Depending on which proof theoretic property is in
question, we would like to know, for instance, that

1. every derivable ϕ is a tautology;

2. if a sentence is derivable from some others, it is also a consequence of


them;

3. if a set of sentences is inconsistent, it is unsatisfiable.

These are important properties of a derivation system. If any of them do not


hold, the derivation system is deficient—it would derive too much. Conse-
quently, establishing the soundness of a derivation system is of the utmost
importance.


Because all these proof-theoretic properties are defined via closed tableaux
of some kind or other, proving (1)–(3) above requires proving something about
the semantic properties of closed tableaux. We will first define what it means
for a signed formula to be satisfied by a valuation, and then show that if a
tableau is closed, no valuation satisfies all its assumptions. (1)–(3) then follow
as corollaries from this result.

Definition 9.22. A valuation v satisfies a signed formula T ϕ iff v  ϕ, and it


satisfies F ϕ iff v 2 ϕ. v satisfies a set of signed formulas Γ iff it satisfies every
S ϕ ∈ Γ. Γ is satisfiable if there is a valuation that satisfies it, and unsatisfiable
otherwise.
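
To make the definition concrete, here is a small computational sketch of satisfaction for signed formulas. The tuple representation of formulas, the function names, and the brute-force search over valuations are illustrative choices of ours, not part of the official development.

from itertools import product

# A sketch of Definition 9.22. Formulas are nested tuples: a
# propositional variable is a string like "p", complex formulas are
# ("not", A), ("and", A, B), ("or", A, B), or ("->", A, B). A signed
# formula is a pair ("T", phi) or ("F", phi).

def evaluate(phi, v):
    """Compute the truth value of phi under the valuation v."""
    if isinstance(phi, str):
        return v[phi]
    op = phi[0]
    if op == "not":
        return not evaluate(phi[1], v)
    if op == "and":
        return evaluate(phi[1], v) and evaluate(phi[2], v)
    if op == "or":
        return evaluate(phi[1], v) or evaluate(phi[2], v)
    if op == "->":
        return (not evaluate(phi[1], v)) or evaluate(phi[2], v)
    raise ValueError("unknown connective: " + op)

def satisfies(v, signed):
    """v satisfies T phi iff phi is true under v, and F phi iff false."""
    sign, phi = signed
    return evaluate(phi, v) if sign == "T" else not evaluate(phi, v)

def satisfiable(signed_formulas, variables):
    """Search all valuations for one satisfying every signed formula."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(satisfies(v, sf) for sf in signed_formulas):
            return True
    return False

# A set containing both T phi and F phi is never satisfiable:
assert not satisfiable([("T", "p"), ("F", "p")], ["p"])
assert satisfiable([("T", ("->", "p", "q")), ("F", "q")], ["p", "q"])

The second assertion corresponds to the fact that {Tp → q, F q} is satisfiable: let p and q both be false.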

Theorem 9.23 (Soundness). If Γ has a closed tableau, Γ is unsatisfiable.

Proof. Let’s call a branch of a tableau satisfiable iff the set of signed formulas
on it is satisfiable, and let’s call a tableau satisfiable if it contains at least one
satisfiable branch.
We show the following: Extending a satisfiable tableau by one of the rules
of inference always results in a satisfiable tableau. This will prove the theo-
rem: any closed tableau results by applying rules of inference to the tableau
consisting only of assumptions from Γ. So if Γ were satisfiable, any tableau
for it would be satisfiable. A closed tableau, however, is clearly not satisfiable:
every branch contains both T ϕ and F ϕ, and no valuation can both satisfy and
not satisfy ϕ.
Suppose we have a satisfiable tableau, i.e., a tableau with at least one sat-
isfiable branch. Applying a rule of inference either adds signed formulas to a
branch, or splits a branch in two. If the tableau has a satisfiable branch which
is not extended by the rule application in question, it remains a satisfiable
branch in the extended tableau, so the extended tableau is satisfiable. So we
only have to consider the case where a rule is applied to a satisfiable branch.
Let Γ be the set of signed formulas on that branch, and let S ϕ ∈ Γ be the
signed formula to which the rule is applied. If the rule does not result in a
split branch, we have to show that the extended branch, i.e., Γ together with
the conclusions of the rule, is still satisfiable. If the rule results in split branch,
we have to show that at least one of the two resulting branches is satisfiable.
First, we consider the rule applications that do not split the branch.

1. The branch is expanded by applying ¬T to T ¬ψ ∈ Γ. Then the ex-


tended branch contains the signed formulas Γ ∪ {F ψ}. Suppose v  Γ.
In particular, v  ¬ψ. Thus, v 2 ψ, i.e., v satisfies F ψ.

2. The branch is expanded by applying ¬F to F ¬ψ ∈ Γ: Exercise.

3. The branch is expanded by applying ∧T to Tψ ∧ χ ∈ Γ, which results


in two new signed formulas on the branch: Tψ and Tχ. Suppose v  Γ,


in particular v  ψ ∧ χ. Then v  ψ and v  χ. This means that v satisfies


both Tψ and Tχ.

4. The branch is expanded by applying ∨F to F ψ ∨ χ ∈ Γ: Exercise.

5. The branch is expanded by applying →F to F ψ → χ ∈ Γ: This results in


two new signed formulas on the branch: Tψ and F χ. Suppose v  Γ, in
particular v 2 ψ → χ. Then v  ψ and v 2 χ. This means that v satisfies
both Tψ and F χ.

Now let’s consider the rule applications that split the branch in two.

1. The branch is expanded by applying ∧F to F ψ ∧ χ ∈ Γ, which results in


two branches, a left one continuing through F ψ and a right one through
F χ. Suppose v  Γ, in particular v 2 ψ ∧ χ. Then v 2 ψ or v 2 χ. In
the former case, v satisfies F ψ, i.e., v satisfies the formulas on the left
branch. In the latter, v satisfies F χ, i.e., v satisfies the formulas on the
right branch.

2. The branch is expanded by applying ∨T to Tψ ∨ χ ∈ Γ: Exercise.

3. The branch is expanded by applying →T to Tψ → χ ∈ Γ: Exercise.

4. The branch is expanded by Cut: This results in two branches, one con-
taining Tψ, the other containing F ψ. Since v  Γ and either v  ψ or
v 2 ψ, v satisfies either the left or the right branch.

Corollary 9.24. If ` ϕ then ϕ is a tautology.

Corollary 9.25. If Γ ` ϕ then Γ  ϕ.

Proof. If Γ ` ϕ then for some ψ1 , . . . , ψn ∈ Γ, {F ϕ, Tψ1 , . . . , Tψn } has a closed


tableau. By Theorem 9.23, every valuation v either makes some ψi false or makes ϕ true.
Hence, if v  Γ then also v  ϕ.

Corollary 9.26. If Γ is satisfiable, then it is consistent.

Proof. We prove the contrapositive. Suppose that Γ is not consistent. Then


there are ψ1 , . . . , ψn ∈ Γ and a closed tableau for {Tψ1 , . . . , Tψn }. By Theorem 9.23, there is
no v such that v  ψi for all i = 1, . . . , n. But then Γ is not satisfiable.


Problems
Problem 9.1. Give closed tableaux of the following:

1. F ¬( ϕ → ψ) → ( ϕ ∧ ¬ψ)

2. F ( ϕ → χ) ∨ (ψ → χ), T ( ϕ ∧ ψ) → χ

Problem 9.2. Prove Proposition 9.13.

Problem 9.3. Prove that Γ ` ¬ ϕ iff Γ ∪ { ϕ} is inconsistent.

Problem 9.4. Complete the proof of Theorem 9.23.



Chapter 10

Axiomatic Derivations

No effort has been made yet to ensure that the material in this chap-
ter respects various tags indicating which connectives and quantifiers are
primitive or defined: all are assumed to be primitive. If the FOL tag is
true, we produce a version with quantifiers, otherwise without.

10.1 Rules and Derivations


Axiomatic derivations are perhaps the simplest proof system for logic. A
derivation is just a sequence of formulas. To count as a derivation, every for-
mula in the sequence must either be an instance of an axiom, or must follow
from one or more formulas that precede it in the sequence by a rule of infer-
ence. A derivation derives its last formula.
Definition 10.1 (Derivability). If Γ is a set of formulas of L then a derivation
from Γ is a finite sequence ϕ1 , . . . , ϕn of formulas where for each i ≤ n one of
the following holds:
1. ϕi ∈ Γ; or
2. ϕi is an axiom; or
3. ϕi follows from some ϕ j (and ϕk ) with j < i (and k < i) by a rule of
inference.
What counts as a correct derivation depends on which inference rules we
allow (and of course what we take to be axioms). And an inference rule is an
if-then statement that tells us that, under certain conditions, a step ϕi in a derivation is a
correct inference step.
Definition 10.2 (Rule of inference). A rule of inference gives a sufficient condi-
tion for what counts as a correct inference step in a derivation from Γ.


For instance, since any one-element sequence ϕ with ϕ ∈ Γ trivially counts


as a derivation, the following might be a very simple rule of inference:

If ϕ ∈ Γ, then ϕ is always a correct inference step in any derivation


from Γ.

Similarly, if ϕ is one of the axioms, then ϕ by itself is a derivation, and so this


is also a rule of inference:

If ϕ is an axiom, then ϕ is a correct inference step.

It gets more interesting if the rule of inference appeals to formulas that appear
before the step considered. The following rule is called modus ponens:

If ψ → ϕ and ψ occur higher up in the derivation, then ϕ is a correct


inference step.

If this is the only rule of inference, then our definition of derivation above
amounts to this: ϕ1 , . . . , ϕn is a derivation iff for each i ≤ n one of the follow-
ing holds:

1. ϕi ∈ Γ; or

2. ϕi is an axiom; or

3. for some j < i, ϕ j is ψ → ϕi , and for some k < i, ϕk is ψ.

The last clause says that ϕi follows from ϕ j (which is ψ → ϕi ) and ϕk (which is ψ) by modus
ponens. If we can go from 1 to n, and each time we find a formula ϕi that is
either in Γ, an axiom, or which a rule of inference tells us that it is a correct
inference step, then the entire sequence counts as a correct derivation.

Definition 10.3 (Derivability). A formula ϕ is derivable from Γ, written Γ ` ϕ,


if there is a derivation from Γ ending in ϕ.

Definition 10.4 (Theorems). A formula ϕ is a theorem if there is a derivation


of ϕ from the empty set. We write ` ϕ if ϕ is a theorem and 0 ϕ if it is not.


10.2 Axiom and Rules for the Propositional Connectives

Definition 10.5 (Axioms). The set Ax0 of axioms for the propositional con-
nectives comprises all formulas of the following forms:

( ϕ ∧ ψ) → ϕ (10.1)
( ϕ ∧ ψ) → ψ (10.2)
ϕ → (ψ → ( ϕ ∧ ψ)) (10.3)
ϕ → ( ϕ ∨ ψ) (10.4)
ϕ → (ψ ∨ ϕ) (10.5)
( ϕ → χ) → ((ψ → χ) → (( ϕ ∨ ψ) → χ)) (10.6)
ϕ → (ψ → ϕ) (10.7)
( ϕ → (ψ → χ)) → (( ϕ → ψ) → ( ϕ → χ)) (10.8)
( ϕ → ψ) → (( ϕ → ¬ψ) → ¬ ϕ) (10.9)
¬ ϕ → ( ϕ → ψ) (10.10)
> (10.11)
⊥→ϕ (10.12)
( ϕ → ⊥) → ¬ ϕ (10.13)
¬¬ ϕ → ϕ (10.14)

Definition 10.6 (Modus ponens). If ψ and ψ → ϕ already occur in a derivation,


then ϕ is a correct inference step.

We’ll abbreviate the rule modus ponens as “MP.”

10.3 Examples of Derivations

Example 10.7. Suppose we want to prove (¬θ ∨ α) → (θ → α). Clearly, this is


not an instance of any of our axioms, so we have to use the MP rule to derive
it. Our only rule is MP, which given ϕ and ϕ → ψ allows us to justify ψ. One
strategy would be to use eq. (10.6) with ϕ being ¬θ, ψ being α, and χ being θ → α, i.e.,
the instance

(¬θ → (θ → α)) → ((α → (θ → α)) → ((¬θ ∨ α) → (θ → α))).

Why? Two applications of MP yield the last part, which is what we want.
And we easily see that ¬θ → (θ → α) is an instance of eq. (10.10), and α → (θ → α) is
an instance of eq. (10.7). So our derivation is:


1. ¬θ → (θ → α) eq. (10.10)
2. (¬θ → (θ → α)) →
((α → (θ → α)) → ((¬θ ∨ α) → (θ → α))) eq. (10.6)
3. (α → (θ → α)) → ((¬θ ∨ α) → (θ → α)) 1, 2, MP
4. α → (θ → α) eq. (10.7)
5. (¬θ ∨ α) → (θ → α) 3, 4, MP

Example 10.8. Let’s try to find a derivation of θ → θ. It is not an instance of


an axiom, so we have to use MP to derive it. Eq. (10.7) is an axiom of the form ϕ → ψ
to which we could apply MP. To be useful, of course, the ψ which MP would
justify as a correct step in this case would have to be θ → θ, since this is what
we want to derive. That means ϕ would also have to be θ, i.e., we might look
at this instance of eq. (10.7):

θ → (θ → θ )

In order to apply MP, we would also need to justify the corresponding second
premise, namely ϕ. But in our case, that would be θ, and we won’t be able to
derive θ by itself. So we need a different strategy.
The other axiom involving just → is eq. (10.8), i.e.,

( ϕ → (ψ → χ)) → (( ϕ → ψ) → ( ϕ → χ))

We could get to the last nested conditional by applying MP twice. Again,


that would mean that we want an instance of eq. (10.8) where ϕ → χ is θ → θ, the
formula we are aiming for. Then of course, ϕ and χ are both θ. How should
we pick ψ so that both ϕ → (ψ → χ) and ϕ → ψ, i.e., in our case θ → (ψ → θ )
and θ → ψ, are also derivable? Well, the first of these is already an instance of
eq. (10.7), whatever we decide ψ to be. And θ → ψ would be another instance of eq. (10.7) if
ψ were (θ → θ ). So, our derivation is:

1. θ → ((θ → θ ) → θ ) eq. (10.7)
2. (θ → ((θ → θ ) → θ )) →
((θ → (θ → θ )) → (θ → θ )) eq. (10.8)
3. (θ → (θ → θ )) → (θ → θ ) 1, 2, MP
4. θ → (θ → θ ) eq. (10.7)
5. θ → θ 3, 4, MP
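
Checking whether a sequence of formulas is a derivation in the sense of Definition 10.1 is a mechanical task, and the derivation just given makes a good test case. The following sketch is our own illustration: it covers only the axiom schemas eq. (10.7) and eq. (10.8) and the rule MP, and the tuple representation of formulas (with uppercase strings as schema metavariables) is an assumption of the sketch, not part of the official definitions.

def matches(schema, formula, subst):
    """Is formula an instance of schema, given the bindings in subst?"""
    if isinstance(schema, str) and schema.isupper():
        if schema in subst:
            return subst[schema] == formula
        subst[schema] = formula
        return True
    if isinstance(schema, str) or isinstance(formula, str):
        return schema == formula
    return (len(schema) == len(formula) and schema[0] == formula[0] and
            all(matches(s, f, subst)
                for s, f in zip(schema[1:], formula[1:])))

SCHEMAS = [
    # eq. (10.7): A -> (B -> A)
    ("->", "A", ("->", "B", "A")),
    # eq. (10.8): (A -> (B -> C)) -> ((A -> B) -> (A -> C))
    ("->", ("->", "A", ("->", "B", "C")),
           ("->", ("->", "A", "B"), ("->", "A", "C"))),
]

def is_axiom(phi):
    return any(matches(schema, phi, {}) for schema in SCHEMAS)

def is_derivation(lines, gamma=()):
    """Each line must be in gamma, an axiom, or follow from two
    earlier lines by modus ponens (Definition 10.1)."""
    for i, phi in enumerate(lines):
        if phi in gamma or is_axiom(phi):
            continue
        if not any(chi == ("->", psi, phi)
                   for psi in lines[:i] for chi in lines[:i]):
            return False
    return True

# The five-line derivation of θ → θ above, with θ := "p":
t = "p"
proof = [
    ("->", t, ("->", ("->", t, t), t)),                     # eq. (10.7)
    ("->", ("->", t, ("->", ("->", t, t), t)),
           ("->", ("->", t, ("->", t, t)), ("->", t, t))),  # eq. (10.8)
    ("->", ("->", t, ("->", t, t)), ("->", t, t)),          # 1, 2, MP
    ("->", t, ("->", t, t)),                                # eq. (10.7)
    ("->", t, t),                                           # 3, 4, MP
]
assert is_derivation(proof)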

Example 10.9. Sometimes we want to show that there is a derivation of some


formula from some other formulas Γ. For instance, let’s show that we can
derive ϕ → χ from Γ = { ϕ → ψ, ψ → χ}.


1. ϕ→ψ H YP
2. ψ→χ H YP
3. (ψ → χ) → ( ϕ → (ψ → χ)) eq. (10.7)
4. ϕ → (ψ → χ) 2, 3, MP
5. ( ϕ → (ψ → χ)) →
(( ϕ → ψ) → ( ϕ → χ)) eq. (10.8)
6. (( ϕ → ψ) → ( ϕ → χ)) 4, 5, MP
7. ϕ→χ 1, 6, MP

The lines labelled “H YP” (for “hypothesis”) indicate that the formula on that
line is an element of Γ.

Proposition 10.10. If Γ ` ϕ → ψ and Γ ` ψ → χ, then Γ ` ϕ → χ

Proof. Suppose Γ ` ϕ → ψ and Γ ` ψ → χ. Then there is a derivation of ϕ → ψ


from Γ; and a derivation of ψ → χ from Γ as well. Combine these into a single
derivation by concatenating them. Now add lines 3–7 of the derivation in the
preceding example. This is a derivation of ϕ → χ—which is the last line of the
new derivation—from Γ. Note that the justifications of lines 4 and 7 remain
valid if the reference to line number 2 is replaced by reference to the last line
of the derivation of ϕ → ψ, and reference to line number 1 by reference to the
last line of the derivation of B → χ.

10.4 Proof-Theoretic Notions


Just as we’ve defined a number of important semantic notions (tautology, en-
tailment, satisfiabilty), we now define corresponding proof-theoretic notions.
These are not defined by appeal to satisfaction of sentences in structures, but
by appeal to the derivability or non-derivability of certain formulas. It was an
important discovery that these notions coincide. That they do is the content
of the soundness and completeness theorems.

Definition 10.11 (Derivability). A formula ϕ is derivable from Γ, written Γ ` ϕ,


if there is a derivation from Γ ending in ϕ.

Definition 10.12 (Theorems). A formula ϕ is a theorem if there is a derivation


of ϕ from the empty set. We write ` ϕ if ϕ is a theorem and 0 ϕ if it is not.

Definition 10.13 (Consistency). A set Γ of formulas is consistent if and only if


Γ 0 ⊥; it is inconsistent otherwise.

Proposition 10.14 (Reflexivity). If ϕ ∈ Γ, then Γ ` ϕ.

Proof. The formula ϕ by itself is a derivation of ϕ from Γ.

Proposition 10.15 (Monotony). If Γ ⊆ ∆ and Γ ` ϕ, then ∆ ` ϕ.


Proof. Any derivation of ϕ from Γ is also a derivation of ϕ from ∆.

Proposition 10.16 (Transitivity). If Γ ` ϕ and { ϕ} ∪ ∆ ` ψ, then Γ ∪ ∆ ` ψ.

Proof. Suppose { ϕ} ∪ ∆ ` ψ. Then there is a derivation ψ1 , . . . , ψl = ψ


from { ϕ} ∪ ∆. Some of the steps in that derivation will be correct because
of a rule which refers to a prior line ψi = ϕ. By hypothesis, there is a deriva-
tion of ϕ from Γ, i.e., a derivation ϕ1 , . . . , ϕk = ϕ where every ϕi is an axiom,
an element of Γ, or correct by a rule of inference. Now consider the sequence

ϕ1 , . . . , ϕk = ϕ, ψ1 , . . . , ψl = ψ.

This is a correct derivation of ψ from Γ ∪ ∆ since every ψi = ϕ is now justified


by the same rule which justifies ϕk = ϕ.

Note that this means that in particular if Γ ` ϕ and ϕ ` ψ, then Γ ` ψ. It


follows also that if ϕ1 , . . . , ϕn ` ψ and Γ ` ϕi for each i, then Γ ` ψ.

Proposition 10.17. Γ is inconsistent iff Γ ` ϕ for every ϕ.

Proof. Exercise.

Proposition 10.18 (Compactness). 1. If Γ ` ϕ then there is a finite subset


Γ0 ⊆ Γ such that Γ0 ` ϕ.

2. If every finite subset of Γ is consistent, then Γ is consistent.

Proof. 1. If Γ ` ϕ, then there is a finite sequence of formulas ϕ1 , . . . , ϕn so


that ϕ ≡ ϕn and each ϕi is either a logical axiom, an element of Γ or
follows from previous formulas by modus ponens. Take Γ0 to be those
ϕi which are in Γ. Then the derivation is likewise a derivation from Γ0 ,
and so Γ0 ` ϕ.

2. This is the contrapositive of (1) for the special case ϕ ≡ ⊥.

10.5 The Deduction Theorem


As we’ve seen, giving derivations in an axiomatic system is cumbersome, and
derivations may be hard to find. Rather than actually write out long lists of
formulas, it is generally easier to argue that such derivations exist, by making
use of a few simple results. We’ve already established three such results: Proposition 10.14
says we can always assert that Γ ` ϕ when we know that ϕ ∈ Γ. Proposition 10.15 says that
if Γ ` ϕ then also Γ ∪ {ψ} ` ϕ. And Proposition 10.16 implies that if Γ ` ϕ and ϕ ` ψ, then
Γ ` ψ. Here’s another simple result, a “meta”-version of modus ponens:

Proposition 10.19. If Γ ` ϕ and Γ ` ϕ → ψ, then Γ ` ψ.


Proof. We have that { ϕ, ϕ → ψ} ` ψ:


1. ϕ Hyp.
2. ϕ→ψ Hyp.
3. ψ 1, 2, MP
By Proposition 10.16 and the remark following it, Γ ` ψ.

The most important result we’ll use in this context is the deduction theo-
rem:

Theorem 10.20 (Deduction Theorem). Γ ∪ { ϕ} ` ψ if and only if Γ ` ϕ → ψ.

Proof. The “if” direction is immediate. If Γ ` ϕ → ψ then also Γ ∪ { ϕ} ` ϕ → ψ


by Proposition 10.15. Also, Γ ∪ { ϕ} ` ϕ by Proposition 10.14. So, by Proposition 10.19, Γ ∪ { ϕ} ` ψ.
For the “only if” direction, we proceed by induction on the length of the
derivation of ψ from Γ ∪ { ϕ}.
For the induction basis, we prove the claim for every derivation of length 1.
A derivation of ψ from Γ ∪ { ϕ} of length 1 consists of ψ by itself; and if it is
correct ψ is either ∈ Γ ∪ { ϕ} or is an axiom. If ψ ∈ Γ or is an axiom, then
Γ ` ψ. We also have that Γ ` ψ → ( ϕ → ψ) by eq. (10.7), and Proposition 10.19 gives Γ ` ϕ → ψ.
If ψ ∈ { ϕ} then Γ ` ϕ → ψ, because then the last sentence ϕ → ψ is the same as
ϕ → ϕ, and we have derived that in Example 10.8.
For the inductive step, suppose a derivation of ψ from Γ ∪ { ϕ} ends with
a step ψ which is justified by modus ponens. (If it is not justified by modus
ponens, ψ ∈ Γ, ψ ≡ ϕ, or ψ is an axiom, and the same reasoning as in the
induction basis applies.) Then some previous steps in the derivation are χ → ψ
and χ, for some formula χ, i.e., Γ ∪ { ϕ} ` χ → ψ and Γ ∪ { ϕ} ` χ, and the
respective derivations are shorter, so the inductive hypothesis applies to them.
We thus have both:

Γ ` ϕ → ( χ → ψ );
Γ ` ϕ → χ.

But also
Γ ` ( ϕ → (χ → ψ)) → (( ϕ → χ) → ( ϕ → ψ)),
by eq. (10.8), and two applications of Proposition 10.19 give Γ ` ϕ → ψ, as required.

Notice how eq. (10.7) and eq. (10.8) were chosen precisely so that the Deduction
Theorem would hold.
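To see the theorem in action: the proof of Proposition 10.19 shows that { ϕ, ϕ → ψ} ` ψ. One application of the deduction theorem yields { ϕ} ` ( ϕ → ψ) → ψ, and a second yields ` ϕ → (( ϕ → ψ) → ψ). We thus obtain a theorem without having to write out an explicit derivation from the axioms.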
The following are some useful facts about derivability, which we leave as
exercises.

Proposition 10.21. 1. ` ( ϕ → ψ) → ((ψ → χ) → ( ϕ → χ));

2. If Γ ∪ {¬ ϕ} ` ¬ψ then Γ ∪ {ψ} ` ϕ (Contraposition);


3. { ϕ, ¬ ϕ} ` ψ (Ex Falso Quodlibet, Explosion);

4. {¬¬ ϕ} ` ϕ (Double Negation Elimination);

5. If Γ ` ¬¬ ϕ then Γ ` ϕ;

10.6 Derivability and Consistency


We will now establish a number of properties of the derivability relation. They
are independently interesting, but each will play a role in the proof of the
completeness theorem.

Proposition 10.22. If Γ ` ϕ and Γ ∪ { ϕ} is inconsistent, then Γ is inconsistent.

Proof. If Γ ∪ { ϕ} is inconsistent, then Γ ∪ { ϕ} ` ⊥. By Proposition 10.14, Γ ` ψ for every
ψ ∈ Γ. Since also Γ ` ϕ by hypothesis, Γ ` ψ for every ψ ∈ Γ ∪ { ϕ}. By Proposition 10.16 and the remark following it,
Γ ` ⊥, i.e., Γ is inconsistent.

Proposition 10.23. Γ ` ϕ iff Γ ∪ {¬ ϕ} is inconsistent.

Proof. First suppose Γ ` ϕ. Then Γ ∪ {¬ ϕ} ` ϕ by Proposition 10.15. Γ ∪ {¬ ϕ} ` ¬ ϕ by Proposition 10.14.
We also have ` ¬ ϕ → ( ϕ → ⊥) by eq. (10.10). So by two applications of Proposition 10.19, we have
Γ ∪ {¬ ϕ} ` ⊥.
Now assume Γ ∪ {¬ ϕ} is inconsistent, i.e., Γ ∪ {¬ ϕ} ` ⊥. By the deduc-
tion theorem, Γ ` ¬ ϕ → ⊥. Γ ` (¬ ϕ → ⊥) → ¬¬ ϕ by eq. (10.13), so Γ ` ¬¬ ϕ by Proposition 10.19.
Since Γ ` ¬¬ ϕ → ϕ (eq. (10.14)), we have Γ ` ϕ by Proposition 10.19 again.

Proposition 10.24. If Γ ` ϕ and ¬ ϕ ∈ Γ, then Γ is inconsistent.

Proof. Γ ` ¬ ϕ → ( ϕ → ⊥) by eq. (10.10). Γ ` ⊥ by two applications of Proposition 10.19.

Proposition 10.25. If Γ ∪ { ϕ} and Γ ∪ {¬ ϕ} are both inconsistent, then Γ is in-


consistent.

Proof. Exercise.

10.7 Derivability and the Propositional Connectives


Proposition 10.26. 1. Both ϕ ∧ ψ ` ϕ and ϕ ∧ ψ ` ψ.

2. ϕ, ψ ` ϕ ∧ ψ.

Proof. 1. From eq. (10.1) and eq. (10.2) by modus ponens.

2. From eq. (10.3) by two applications of modus ponens.

Proposition 10.27. 1. ϕ ∨ ψ, ¬ ϕ, ¬ψ is inconsistent.


2. Both ϕ ` ϕ ∨ ψ and ψ ` ϕ ∨ ψ.

Proof. 1. From eq. (10.10) we get ` ¬ ϕ → ( ϕ → ⊥) and ` ¬ψ → (ψ → ⊥). So by


the deduction theorem, we have {¬ ϕ} ` ϕ → ⊥ and {¬ψ} ` ψ → ⊥.
From eq. (10.6) we get {¬ ϕ, ¬ψ} ` ( ϕ ∨ ψ) → ⊥. By the deduction theorem,
{ ϕ ∨ ψ, ¬ ϕ, ¬ψ} ` ⊥.

2. From eq. (10.4) and eq. (10.5) by modus ponens.

Proposition 10.28. 1. ϕ, ϕ → ψ ` ψ.

2. Both ¬ ϕ ` ϕ → ψ and ψ ` ϕ → ψ.

Proof. 1. We can derive:

1. ϕ H YP
2. ϕ→ψ H YP
3. ψ 1, 2, MP

2. By eq. (10.10) and eq. (10.7) and the deduction theorem, respectively.

10.8 Soundness
A derivation system, such as axiomatic deduction, is sound if it cannot de-
rive things that do not actually hold. Soundness is thus a kind of guaranteed
safety property for derivation systems. Depending on which proof theoretic
property is in question, we would like to know, for instance, that

1. every derivable ϕ is valid;

2. if ϕ is derivable from some others Γ, it is also a consequence of them;

3. if a set of formulas Γ is inconsistent, it is unsatisfiable.

These are important properties of a derivation system. If any of them do not


hold, the derivation system is deficient—it would derive too much. Conse-
quently, establishing the soundness of a derivation system is of the utmost
importance.

Proposition 10.29. If ϕ is an axiom, then v  ϕ for each valuation v.

Proof. Do truth tables for each axiom to verify that they are tautologies.
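
For the propositional axioms this truth-table check can be carried out mechanically. The following sketch is our own illustration; it uses the nested-tuple representation of formulas from the earlier sketches, with the string "bot" standing in for ⊥.

from itertools import product

def evaluate(phi, v):
    if phi == "bot":
        return False
    if isinstance(phi, str):
        return v[phi]
    op = phi[0]
    if op == "not":
        return not evaluate(phi[1], v)
    if op == "and":
        return evaluate(phi[1], v) and evaluate(phi[2], v)
    if op == "or":
        return evaluate(phi[1], v) or evaluate(phi[2], v)
    if op == "->":
        return (not evaluate(phi[1], v)) or evaluate(phi[2], v)

def variables_in(phi):
    """Collect the propositional variables occurring in phi."""
    if phi == "bot":
        return set()
    if isinstance(phi, str):
        return {phi}
    return set().union(*(variables_in(sub) for sub in phi[1:]))

def is_tautology(phi):
    """Check every line of phi's truth table."""
    vs = sorted(variables_in(phi))
    return all(evaluate(phi, dict(zip(vs, values)))
               for values in product([True, False], repeat=len(vs)))

# Instances of eq. (10.7), eq. (10.9), and eq. (10.12):
assert is_tautology(("->", "p", ("->", "q", "p")))
assert is_tautology(("->", ("->", "p", "q"),
                     ("->", ("->", "p", ("not", "q")), ("not", "p"))))
assert is_tautology(("->", "bot", "p"))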

Theorem 10.30 (Soundness). If Γ ` ϕ then Γ  ϕ.


Proof. By induction on the length of the derivation of ϕ from Γ. If there are


no steps justified by inferences, then all formulas in the derivation are either
instances of axioms or are in Γ. By the previous proposition, all the axioms
are tautologies, and hence if ϕ is an axiom then Γ  ϕ. If ϕ ∈ Γ, then trivially
Γ  ϕ.
If the last step of the derivation of ϕ is justified by modus ponens, then
there are formulas ψ and ψ → ϕ in the derivation, and the induction hypoth-
esis applies to the part of the derivation ending in those formulas (since they
contain at least one fewer steps justified by an inference). So, by induction
hypothesis, Γ  ψ and Γ  ψ → ϕ. Then Γ  ϕ by ??.

Corollary 10.31. If ` ϕ, then ϕ is a tautology.

Corollary 10.32. If Γ is satisfiable, then it is consistent.

Proof. We prove the contrapositive. Suppose that Γ is not consistent. Then


Γ ` ⊥, i.e., there is a derivation of ⊥ from Γ. By Theorem 10.30, any valuation v that
satisfies Γ must satisfy ⊥. Since v 2 ⊥ for every valuation v, no v can satisfy
Γ, i.e., Γ is not satisfiable.

Problems
Problem 10.1. Show that the following hold by exhibiting derivations from
the axioms:

1. ( ϕ ∧ ψ) → (ψ ∧ ϕ)

2. (( ϕ ∧ ψ) → χ) → ( ϕ → (ψ → χ))

3. ¬( ϕ ∨ ψ) → ¬ ϕ

Problem 10.2. Prove Proposition 10.17.

Problem 10.3. Prove Proposition 10.21.

Problem 10.4. Prove that Γ ` ¬ ϕ iff Γ ∪ { ϕ} is inconsistent.

Problem 10.5. Prove Proposition 10.25.



Chapter 11

The Completeness Theorem

11.1 Introduction
The completeness theorem is one of the most fundamental results about logic.
It comes in two formulations, the equivalence of which we’ll prove. In its first
formulation it says something fundamental about the relationship between
semantic consequence and our proof system: if a sentence ϕ follows from
some sentences Γ, then there is also a derivation that establishes Γ ` ϕ. Thus,
the proof system is as strong as it can possibly be without proving things that
don’t actually follow.
In its second formulation, it can be stated as a model existence result: ev-
ery consistent set of sentences is satisfiable. Consistency is a proof-theoretic
notion: it says that our proof system is unable to produce certain derivations.
But who’s to say that just because there are no derivations of a certain sort
from Γ, it’s guaranteed that there is a valuation v with v  Γ? Before the com-
pleteness theorem was first proved—in fact before we had the proof systems
we now do—the great German mathematician David Hilbert held the view
that consistency of mathematical theories guarantees the existence of the ob-
jects they are about. He put it as follows in a letter to Gottlob Frege:

If the arbitrarily given axioms do not contradict one another with


all their consequences, then they are true and the things defined by
the axioms exist. This is for me the criterion of truth and existence.

Frege vehemently disagreed. The second formulation of the completeness the-


orem shows that Hilbert was right in at least the sense that if the axioms are
consistent, then some valuation exists that makes them all true.
These aren’t the only reasons the completeness theorem—or rather, its
proof—is important. It has a number of important consequences, some of
which we’ll discuss separately. For instance, since any derivation that shows
Γ ` ϕ is finite and so can only use finitely many of the sentences in Γ, it fol-
lows by the completeness theorem that if ϕ is a consequence of Γ, it is already


a consequence of a finite subset of Γ. This is called compactness. Equivalently,


if every finite subset of Γ is consistent, then Γ itself must be consistent.
Although the compactness theorem follows from the completeness theo-
rem via the detour through derivations, it is also possible to use the proof
of the completeness theorem to establish it directly. For what the proof does is
take a set of sentences with a certain property—consistency—and construct
a valuation out of this set that has certain properties (in this case, that it satisfies
the set). Almost the very same construction can be used to directly establish
compactness, by starting from “finitely satisfiable” sets of sentences instead of
consistent ones.

11.2 Outline of the Proof


The proof of the completeness theorem is a bit complex, and upon first reading
it, it is easy to get lost. So let us outline the proof. The first step is a shift of
perspective, that allows us to see a route to a proof. When completeness is
thought of as “whenever Γ  ϕ then Γ ` ϕ,” it may be hard to even come up
with an idea: for to show that Γ ` ϕ we have to find a derivation, and it does
not look like the hypothesis that Γ  ϕ helps us for this in any way. For some
proof systems it is possible to directly construct a derivation, but we will take
a slightly different tack. The shift in perspective required is this: completeness
can also be formulated as: “if Γ is consistent, it has a model.” Perhaps we can
use the information in Γ together with the hypothesis that it is consistent to
construct a model. After all, we know what kind of model we are looking for:
one that is as Γ describes it!
If Γ contains only propositional variables, it is easy to construct a model
for it. All we have to do is come up with a valuation v such that v  p for all
p ∈ Γ. Well, let v( p) = T iff p ∈ Γ.
Now suppose Γ contains some formula ¬ψ, with ψ atomic. We might
worry that the construction of v interferes with the possibility of making ¬ψ
true. But here’s where the consistency of Γ comes in: if ¬ψ ∈ Γ, then ψ ∈ / Γ, or
else Γ would be inconsistent. And if ψ ∈ / Γ, then according to our construction
of v, v 2 ψ, so v  ¬ψ. So far so good.
What if Γ contains complex, non-atomic formulas? Say it contains ϕ ∧ ψ.
To make that true, we should proceed as if both ϕ and ψ were in Γ. And if
ϕ ∨ ψ ∈ Γ, then we will have to make at least one of them true, i.e., proceed
as if one of them was in Γ.
This suggests the following idea: we add additional formulas to Γ so as to
(a) keep the resulting set consistent and (b) make sure that for every possible
sentence ϕ, either ϕ is in the resulting set, or ¬ ϕ is, and (c) such that,
whenever ϕ ∧ ψ is in the set, so are both ϕ and ψ, if ϕ ∨ ψ is in the set, at least
one of ϕ or ψ is also, etc. We keep doing this (potentially forever). Call the set
of all formulas so added Γ ∗ . Then our construction above would provide us


with a valuation v for which we could prove, by induction, that all sentences
in Γ ∗ are true in it, and hence also all sentence in Γ since Γ ⊆ Γ ∗ . It turns
out that guaranteeing (a) and (b) is enough. A set of sentences for which (b)
holds is called complete. So our task will be to extend the consistent set Γ to a
consistent and complete set Γ ∗ .
So here’s what we’ll do. First we investigate the properties of complete
consistent sets, in particular we prove that a complete consistent set contains
ϕ ∧ ψ iff it contains both ϕ and ψ, ϕ ∨ ψ iff it contains at least one of them, etc.
(??). We’ll then take the consistent set Γ and show that it can be extended to a
consistent and complete set Γ ∗ (??). This set Γ ∗ is what we’ll use to define our
valuation v( Γ ∗ ). The valuation is determined by the propositional variables
in Γ ∗ (??). We’ll use the properties of complete consistent sets to show that
indeed v( Γ ∗ )  ϕ iff ϕ ∈ Γ ∗ (??), and thus in particular, v( Γ ∗ )  Γ.

11.3 Complete Consistent Sets of Sentences


Definition 11.1 (Complete set). A set Γ of sentences is complete iff for any
sentence ϕ, either ϕ ∈ Γ or ¬ ϕ ∈ Γ.

Complete sets of sentences leave no questions unanswered. For any sen-


tence ϕ, Γ “says” whether ϕ is true or false. The importance of complete sets extends
beyond the proof of the completeness theorem. A theory which is complete
and axiomatizable, for instance, is always decidable.
Complete consistent sets are important in the completeness proof since we
can guarantee that every consistent set of sentences Γ is contained in a com-
plete consistent set Γ ∗ . A complete consistent set contains, for each sentence ϕ,
either ϕ or its negation ¬ ϕ, but not both. This is true in particular for propositional
variables, so from a complete consistent set we can construct a valuation which
assigns truth values to the propositional variables according to whether they are in Γ ∗ .
This valuation can then be shown to make all sentences in Γ ∗ (and hence also
all those in Γ) true. The proof of this latter fact requires that ¬ ϕ ∈ Γ ∗ iff
ϕ ∉ Γ ∗ , ( ϕ ∨ ψ) ∈ Γ ∗ iff ϕ ∈ Γ ∗ or ψ ∈ Γ ∗ , etc.
In what follows, we will often tacitly use the properties of reflexivity, mono-
tonicity, and transitivity of ` (see Propositions 9.10 to 9.12 and 10.14 to 10.16).

Proposition 11.2. Suppose Γ is complete and consistent. Then:

1. If Γ ` ϕ, then ϕ ∈ Γ.

2. ϕ ∧ ψ ∈ Γ iff both ϕ ∈ Γ and ψ ∈ Γ.

3. ϕ ∨ ψ ∈ Γ iff either ϕ ∈ Γ or ψ ∈ Γ.

4. ϕ → ψ ∈ Γ iff either ϕ ∈
/ Γ or ψ ∈ Γ.


Proof. Let us suppose for all of the following that Γ is complete and consistent.

1. If Γ ` ϕ, then ϕ ∈ Γ.
Suppose that Γ ` ϕ. Suppose to the contrary that ϕ ∈ / Γ. Since Γ is
complete, ¬ ϕ ∈ Γ. By Proposition 9.17 (or Proposition 10.24), Γ is inconsistent. This contradicts
the assumption that Γ is consistent. Hence, it cannot be the case that
ϕ∈/ Γ, so ϕ ∈ Γ.

2. Exercise.

3. First we show that if ϕ ∨ ψ ∈ Γ, then either ϕ ∈ Γ or ψ ∈ Γ. Suppose


ϕ ∨ ψ ∈ Γ but ϕ ∈ / Γ and ψ ∈ / Γ. Since Γ is complete, ¬ ϕ ∈ Γ and
¬ψ ∈ Γ. By Proposition 9.20 (or Proposition 10.27), item (1), Γ is inconsistent, a contradiction.
Hence, either ϕ ∈ Γ or ψ ∈ Γ.
For the reverse direction, suppose that ϕ ∈ Γ or ψ ∈ Γ. By Proposition 9.20 (or Proposition 10.27),
item (2), Γ ` ϕ ∨ ψ. By item (1) of this proposition, ϕ ∨ ψ ∈ Γ, as required.

4. Exercise.

11.4 Lindenbaum’s Lemma


We now prove a lemma that shows that any consistent set of sentences is con-
tained in some set of sentences which is not just consistent, but also complete.
The proof works by adding one sentence at a time, guaranteeing at each step
that the set remains consistent. We do this so that for every ϕ, either ϕ or ¬ ϕ
gets added at some stage. The union of all stages in that construction then
contains either ϕ or its negation ¬ ϕ and is thus complete. It is also consistent,
since we made sure at each stage not to introduce an inconsistency.

Lemma 11.3 (Lindenbaum’s Lemma). Every consistent set Γ in a language L can


be extended to a complete and consistent set Γ ∗ .

Proof. Let Γ be consistent. Let ϕ0 , ϕ1 , . . . be an enumeration of all the sen-


tences of L. Define Γ0 = Γ, and
Γn+1 = Γn ∪ { ϕn } if Γn ∪ { ϕn } is consistent;
Γn+1 = Γn ∪ {¬ ϕn } otherwise.

Let Γ ∗ = ⋃n≥0 Γn .

Each Γn is consistent: Γ0 is consistent by definition. If Γn+1 = Γn ∪ { ϕn },


this is because the latter is consistent. If it isn’t, Γn+1 = Γn ∪ {¬ ϕn }. We have
to verify that Γn ∪ {¬ ϕn } is consistent. Suppose it’s not. Then both Γn ∪ { ϕn }
and Γn ∪ {¬ ϕn } are inconsistent. This means that Γn would be inconsistent by
Proposition 9.18 (or Proposition 10.25), contrary to the induction hypothesis.


For every n and every i < n, Γi ⊆ Γn . This follows by a simple induction


on n. For n = 0, there are no i < 0, so the claim holds automatically. For
the inductive step, suppose it is true for n. We have Γn+1 = Γn ∪ { ϕn } or
= Γn ∪ {¬ ϕn } by construction. So Γn ⊆ Γn+1 . If i < n, then Γi ⊆ Γn by
inductive hypothesis, and so ⊆ Γn+1 by transitivity of ⊆.
From this it follows that every finite subset of Γ ∗ is a subset of Γn for
some n, since each ψ ∈ Γ ∗ not already in Γ0 is added at some stage i. If n
is the last one of these, then all ψ in the finite subset are in Γn . So, every finite
subset of Γ ∗ is consistent. By Proposition 9.14(2) (or Proposition 10.18(2)), Γ ∗ is consistent.
Every sentence of Frm(L) appears on the list used to define Γ ∗ . If ϕn ∉ Γ ∗ ,
then that is because Γn ∪ { ϕn } was inconsistent. But then ¬ ϕn ∈ Γ ∗ , so Γ ∗ is

complete.
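
For finitely many propositional variables, the stage-by-stage construction in this proof can even be simulated. The sketch below is our own illustration: it tests the consistency of a (finite) stage by brute-force satisfiability—which by the soundness and completeness theorems comes to the same thing—and it uses the nested-tuple representation of formulas from the earlier sketches.

from itertools import product

def evaluate(phi, v):
    if isinstance(phi, str):
        return v[phi]
    op = phi[0]
    if op == "not":
        return not evaluate(phi[1], v)
    if op == "and":
        return evaluate(phi[1], v) and evaluate(phi[2], v)
    if op == "or":
        return evaluate(phi[1], v) or evaluate(phi[2], v)
    if op == "->":
        return (not evaluate(phi[1], v)) or evaluate(phi[2], v)

def satisfiable(formulas, variables):
    return any(all(evaluate(phi, dict(zip(variables, values)))
                   for phi in formulas)
               for values in product([True, False], repeat=len(variables)))

def lindenbaum(gamma, enumeration, variables):
    """Extend gamma one sentence at a time: add phi_n if the result
    stays satisfiable, and its negation otherwise."""
    stage = list(gamma)
    for phi in enumeration:
        if satisfiable(stage + [phi], variables):
            stage.append(phi)
        else:
            stage.append(("not", phi))
    return stage

# With Γ = {p ∨ q, ¬p} and the (finite) enumeration p, q, the
# construction adds ¬p (since p is refuted) and then q:
result = lindenbaum([("or", "p", "q"), ("not", "p")],
                    ["p", "q"], ["p", "q"])
assert ("not", "p") in result and "q" in result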

11.5 Construction of a Model


We are now ready to define a valuation that makes all ϕ ∈ Γ true. To do this,
we first apply Lindenbaum’s Lemma: we get a complete consistent Γ ∗ ⊇ Γ.
We let the propositional variables in Γ ∗ determine v( Γ ∗ ).

Definition 11.4. Suppose Γ ∗ is a complete consistent set of formulas. Then


we let

v( Γ ∗ )( p) = T if p ∈ Γ ∗ , and
v( Γ ∗ )( p) = F otherwise.

Lemma 11.5 (Truth Lemma). v( Γ ∗ )  ϕ iff ϕ ∈ Γ ∗ .

Proof. We prove both directions simultaneously, and by induction on ϕ.

1. ϕ ≡ ⊥: v( Γ ∗ ) 2 ⊥ by definition of satisfaction. On the other hand,


⊥ ∉ Γ ∗ since Γ ∗ is consistent.

2. ϕ ≡ p: v( Γ ∗ )  p iff v( Γ ∗ )( p) = T (by the definition of satisfaction) iff


p ∈ Γ ∗ (by the construction of v( Γ ∗ )).

3. ϕ ≡ ¬ψ: v( Γ ∗ )  ϕ iff v( Γ ∗ ) 2 ψ (by definition of satisfaction). By
induction hypothesis, v( Γ ∗ ) 2 ψ iff ψ ∉ Γ ∗ . Since Γ ∗ is consistent and
complete, ψ ∉ Γ ∗ iff ¬ψ ∈ Γ ∗ .

4. ϕ ≡ ψ ∧ χ: exercise.

5. ϕ ≡ ψ ∨ χ: v( Γ ∗ )  ϕ iff v( Γ ∗ )  ψ or v( Γ ∗ )  χ (by definition of
satisfaction) iff ψ ∈ Γ ∗ or χ ∈ Γ ∗ (by induction hypothesis). This is the
case iff (ψ ∨ χ) ∈ Γ ∗ (by Proposition 11.2(3)).

6. ϕ ≡ ψ → χ: exercise.


11.6 The Completeness Theorem


Let’s combine our results: we arrive at the completeness theorem.

Theorem 11.6 (Completeness Theorem). Let Γ be a set of sentences. If Γ is con-


sistent, it is satisfiable.

Proof. Suppose Γ is consistent. By Lemma 11.3, there is a Γ ∗ ⊇ Γ which is consistent


and complete. By Lemma 11.5, v( Γ ∗ )  ϕ iff ϕ ∈ Γ ∗ . From this it follows in particular
that for all ϕ ∈ Γ, v( Γ ∗ )  ϕ, so Γ is satisfiable.

Corollary 11.7 (Completeness Theorem, Second Version). For any set of sentences Γ
and sentence ϕ: if Γ  ϕ then Γ ` ϕ.

Proof. Note that the Γ’s in Theorem 11.6 and Corollary 11.7 are universally quantified. To make sure
we do not confuse ourselves, let us restate Theorem 11.6 using a different variable: for
any set of sentences ∆, if ∆ is consistent, it is satisfiable. By contraposition, if ∆
is not satisfiable, then ∆ is inconsistent. We will use this to prove the corollary.
Suppose that Γ  ϕ. Then Γ ∪ {¬ ϕ} is unsatisfiable: any valuation satisfying
Γ ∪ {¬ ϕ} would satisfy Γ but not ϕ, contradicting Γ  ϕ. Taking Γ ∪ {¬ ϕ}
as our ∆, the previous version of Theorem 11.6 gives us that Γ ∪ {¬ ϕ} is inconsistent. By
Proposition 9.16 (or Proposition 10.23), Γ ` ϕ.

11.7 The Compactness Theorem


One important consequence of the completeness theorem is the compactness
theorem. The compactness theorem states that if each finite subset of a set
of sentences is satisfiable, the entire set is satisfiable—even if the set itself is
infinite. This is far from obvious. There is nothing that seems to rule out,
at first glance at least, the possibility of there being infinite sets of sentences
which are contradictory, but the contradiction only arises, so to speak, from
the infinite number. The compactness theorem says that such a scenario can
be ruled out: there are no unsatisfiable infinite sets of sentences each finite
subset of which is satisfiable. Like the completeness theorem, it has a version
related to entailment: if an infinite set of sentences entails something, already
a finite subset does.

Definition 11.8. A set Γ of formulas is finitely satisfiable if and only if every


finite Γ0 ⊆ Γ is satisfiable.

Theorem 11.9 (Compactness Theorem). The following hold for any set of sentences Γ
and sentence ϕ:

1. Γ  ϕ iff there is a finite Γ0 ⊆ Γ such that Γ0  ϕ.

2. Γ is satisfiable if and only if it is finitely satisfiable.


Proof. We prove (2). If Γ is satisfiable, then there is a valuation v such that


v  ϕ for all ϕ ∈ Γ. Of course, this v also satisfies every finite subset of Γ, so Γ
is finitely satisfiable.
Now suppose that Γ is finitely satisfiable. Then every finite subset Γ0 ⊆ Γ
is satisfiable. By soundness (Corollaries 9.26 and 10.32), every finite subset is consistent.
Then Γ itself must be consistent by Proposition 9.14(2) (or Proposition 10.18(2)). By
completeness (Theorem 11.6), since Γ is consistent, it is satisfiable.

11.8 A Direct Proof of the Compactness Theorem


We can prove the Compactness Theorem directly, without appealing to the
Completeness Theorem, using the same ideas as in the proof of the complete-
ness theorem. In the proof of the Completeness Theorem we started with a
consistent set Γ of sentences, expanded it to a consistent and complete set Γ ∗
of sentences, and then showed that in the valuation v( Γ ∗ ) constructed from
Γ ∗ , all sentences of Γ are true, so Γ is satisfiable.
We can use the same method to show that a finitely satisfiable set of sen-
tences is satisfiable. We just have to prove the corresponding versions of
the results leading to the truth lemma where we replace “consistent” with
“finitely satisfiable.”

Proposition 11.10. Suppose Γ is complete and finitely satisfiable. Then:

1. ( ϕ ∧ ψ) ∈ Γ iff both ϕ ∈ Γ and ψ ∈ Γ.

2. ( ϕ ∨ ψ) ∈ Γ iff either ϕ ∈ Γ or ψ ∈ Γ.

3. ( ϕ → ψ) ∈ Γ iff either ϕ ∈
/ Γ or ψ ∈ Γ.

Lemma 11.11. Every finitely satisfiable set Γ can be extended to a complete and
finitely satisfiable set Γ ∗ .

Theorem 11.12 (Compactness). Γ is satisfiable if and only if it is finitely satisfiable.

Proof. If Γ is satisfiable, then there is a valuation v such that v  ϕ for all
ϕ ∈ Γ. Of course, this v also satisfies every finite subset of Γ, so Γ is finitely
satisfiable.
Now suppose that Γ is finitely satisfiable. By Lemma 11.11, Γ can be extended to
a complete and finitely satisfiable set Γ ∗ . Construct the valuation v( Γ ∗ ) as in
Definition 11.4. The proof of the Truth Lemma (Lemma 11.5) goes through if we
replace references to Proposition 11.2 by references to Proposition 11.10.

Problems
Problem 11.1. Complete the proof of Proposition 11.2.


Problem 11.2. Complete the proof of Lemma 11.5.

Problem 11.3. Use Corollary 11.7 to prove Theorem 11.6, thus showing that the two formulations of
the completeness theorem are equivalent.

Problem 11.4. In order for a derivation system to be complete, its rules must
be strong enough to prove every unsatisfiable set inconsistent. Which of the
rules of derivation were necessary to prove completeness? Are any of these
rules not used anywhere in the proof? In order to answer these questions,
make a list or diagram that shows which of the rules of derivation were used
in which results that lead up to the proof of Theorem 11.6. Be sure to note any tacit uses
of rules in these proofs.

Problem 11.5. Prove (1) of Theorem 11.9.

Problem 11.6. Prove Proposition 11.10. Avoid the use of `.

Problem 11.7. Prove Lemma 11.11. (Hint: the crucial step is to show that if Γn is finitely
satisfiable, then either Γn ∪ { ϕn } or Γn ∪ {¬ ϕn } is finitely satisfiable.)

Problem 11.8. Write out the complete proof of the Truth Lemma (Lemma 11.5) in the
version required for the proof of Theorem 11.12.



Part III

First-order Logic


This part covers the metatheory of first-order logic through complete-


ness. Currently it does not rely on a separate treatment of propositional
logic; everything is proved. The source files will exclude the material on
quantifiers (and replace “structure” with “valuation”, M with v, etc.) if
the “FOL” tag is false. In fact, most of the material in the part on propo-
sitional logic is simply the first-order material with the “FOL” tag turned
off.
If the part on propositional logic is included, this results in a lot of rep-
etition. It is planned, however, to make it possible to let this part take into
account the material on propositional logic (and exclude the material al-
ready covered, as well as shorten proofs with references to the respective
places in the propositional part).
Currently four different proof systems are offered as alternatives, se-
quent calculus, natural deduction, signed tableaux, and axiomatic proofs.
This part still needs an introduction (issue #69).



Chapter 12

Syntax and Semantics

12.1 Introduction
In order to develop the theory and metatheory of first-order logic, we must
first define the syntax and semantics of its expressions. The expressions of
first-order logic are terms and formulas. Terms are formed from variables,
constant symbols, and function symbols. Formulas, in turn, are formed from
predicate symbols together with terms (these form the smallest, “atomic” for-
mulas), and then from atomic formulas we can form more complex ones us-
ing logical connectives and quantifiers. There are many different ways to set
down the formation rules; we give just one possible one. Other systems will
choose different symbols, will select different sets of connectives as primitive,
will use parentheses differently (or even not at all, as in the case of so-called
Polish notation). What all approaches have in common, though, is that the
formation rules define the set of terms and formulas inductively. If done prop-
erly, every expression can be formed in essentially only one way according to the
formation rules. The inductive definition resulting in expressions that are
uniquely readable means we can give meanings to these expressions using the
same method—inductive definition.
Giving the meaning of expressions is the domain of semantics. The central
concept in semantics is that of satisfaction in a structure. A structure gives
meaning to the building blocks of the language: a domain is a non-empty
set of objects. The quantifiers are interpreted as ranging over this domain,
constant symbols are assigned elements in the domain, function symbols are
assigned functions from the domain to itself, and predicate symbols are as-
signed relations on the domain. The domain together with assignments to the
basic vocabulary constitutes a structure. Variables may appear in formulas,
and in order to give a semantics, we also have to assign elements of the do-
main to them—this is a variable assignment. The satisfaction relation, finally,
brings these together. A formula may be satisfied in a structure M relative to a
variable assignment s, written as M, s  ϕ. This relation is also defined by in-


duction on the structure of ϕ, using the truth tables for the logical connectives
to define, say, satisfaction of ϕ ∧ ψ in terms of satisfaction (or not) of ϕ and
ψ. It then turns out that the variable assignment is irrelevant if the formula ϕ
is a sentence, i.e., has no free variables, and so we can talk of sentences being
simply satisfied (or not) in structures.
On the basis of the satisfaction relation M  ϕ for sentences we can then
define the basic semantic notions of validity, entailment, and satisfiability. A
sentence is valid,  ϕ, if every structure satisfies it. It is entailed by a set of
sentences, Γ  ϕ, if every structure that satisfies all the sentences in Γ also
satisfies ϕ. And a set of sentences is satisfiable if some structure satisfies all
sentences in it at the same time. Because formulas are inductively defined,
and satisfaction is in turn defined by induction on the structure of formulas,
we can use induction to prove properties of our semantics and to relate the
semantic notions defined.

12.2 First-Order Languages


Expressions of first-order logic are built up from a basic vocabulary containing
variables, constant symbols, predicate symbols and sometimes function symbols.
From them, together with logical connectives, quantifiers, and punctuation
symbols such as parentheses and commas, terms and formulas are formed.
Informally, predicate symbols are names for properties and relations, con-
stant symbols are names for individual objects, and function symbols are names
for mappings. These, except for the identity predicate =, are the non-logical
symbols and together make up a language. Any first-order language L is de-
termined by its non-logical symbols. In the most general case, L contains
infinitely many symbols of each kind.
In the general case, we make use of the following symbols in first-order
logic:

1. Logical symbols

a) Logical connectives: ¬ (negation), ∧ (conjunction), ∨ (disjunction),


→ (conditional), ∀ (universal quantifier), ∃ (existential quantifier).
b) The propositional constant for falsity ⊥.
c) The two-place identity predicate =.
d) A denumerable set of variables: v0 , v1 , v2 , . . .

2. Non-logical symbols, making up the standard language of first-order logic

a) A denumerable set of n-place predicate symbols for each n > 0: A0n ,


A1n , A2n , . . .
b) A denumerable set of constant symbols: c0 , c1 , c2 , . . . .


c) A denumerable set of n-place function symbols for each n > 0: f0n ,


f1n , f2n , . . .

3. Punctuation marks: (, ), and the comma.

Most of our definitions and results will be formulated for the full standard
language of first-order logic. However, depending on the application, we may
also restrict the language to only a few predicate symbols, constant symbols,
and function symbols.

Example 12.1. The language L A of arithmetic contains a single two-place


predicate symbol <, a single constant symbol 0, one one-place function sym-
bol ′ (the successor symbol), and two two-place function symbols + and ×.

Example 12.2. The language of set theory L Z contains only the single two-
place predicate symbol ∈.

Example 12.3. The language of orders L≤ contains only the two-place predi-
cate symbol ≤.

Again, these are conventions: officially, these are just aliases, e.g., <, ∈,
and ≤ are aliases for A20 , 0 for c0 , ′ for f01 , + for f02 , × for f12 .
In addition to the primitive connectives and quantifiers introduced above,
we also use the following defined symbols: ↔ (biconditional) and > (truth).
A defined symbol is not officially part of the language, but is introduced
as an informal abbreviation: it allows us to abbreviate formulas which would,
if we only used primitive symbols, get quite long. This is obviously an ad-
vantage. The bigger advantage, however, is that proofs become shorter. If a
symbol is primitive, it has to be treated separately in proofs. The more primi-
tive symbols, therefore, the longer our proofs.
You may be familiar with different terminology and symbols than the ones
we use above. Logic texts (and teachers) commonly use either ∼, ¬, and ! for
“negation”, ∧, ·, and & for “conjunction”. Commonly used symbols for the
“conditional” or “implication” are →, ⇒, and ⊃. Symbols for “biconditional,”
“bi-implication,” or “(material) equivalence” are ↔, ⇔, and ≡. The ⊥ sym-
bol is variously called “falsity,” “falsum,” “absurdity,” or “bottom.” The >
symbol is variously called “truth,” “verum,” or “top.”
It is conventional to use lower case letters (e.g., a, b, c) from the begin-
ning of the Latin alphabet for constant symbols (sometimes called names),
and lower case letters from the end (e.g., x, y, z) for variables. Quantifiers
combine with variables, e.g., ∀ x; notational variations include ∀ x, (∀ x ), ( x ),
Πx, ⋀ x for the universal quantifier and ∃ x, (∃ x ), ( Ex ), Σx, ⋁ x for the existen-
tial quantifier.
We might treat all the propositional operators and both quantifiers as prim-
itive symbols of the language. We might instead choose a smaller stock of
primitive symbols and treat the other logical operators as defined. “Truth


functionally complete” sets of Boolean operators include {¬, ∨}, {¬, ∧}, and
{¬, →}—these can be combined with either quantifier for an expressively
complete first-order language.
You may be familiar with two other logical operators: the Sheffer stroke |
(named after Henry Sheffer), and Peirce’s arrow ↓, also known as Quine’s
dagger. When given their usual readings of “nand” and “nor” (respectively),
these operators are truth functionally complete by themselves.

12.3 Terms and Formulas


Once a first-order language L is given, we can define expressions built up
from the basic vocabulary of L. These include in particular terms and formulas.

Definition 12.4 (Terms). The set of terms Trm(L) of L is defined inductively


by:

1. Every variable is a term.

2. Every constant symbol of L is a term.

3. If f is an n-place function symbol and t1 , . . . , tn are terms, then f (t1 , . . . , tn )


is a term.

4. Nothing else is a term.

A term containing no variables is a closed term.

The constant symbols appear in our specification of the language and the
terms as a separate category of symbols, but they could instead have been in-
cluded as zero-place function symbols. We could then do without the second
clause in the definition of terms. We just have to understand f (t1 , . . . , tn ) as
just f by itself if n = 0.
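
The inductive definition of terms translates directly into a recursive datatype, one constructor per clause. The sketch below is our own illustration; the class names and representation are not part of the official definition.

from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str              # a variable, e.g., Var("v0")

@dataclass(frozen=True)
class Const:
    name: str              # a constant symbol, e.g., Const("c0")

@dataclass(frozen=True)
class Func:
    name: str              # an n-place function symbol
    args: tuple            # its n argument terms

def is_closed(t):
    """A term is closed iff it contains no variables."""
    if isinstance(t, Var):
        return False
    if isinstance(t, Const):
        return True
    return all(is_closed(arg) for arg in t.args)

# f(c0, g(v0)) is a term, but not a closed one:
assert not is_closed(Func("f", (Const("c0"), Func("g", (Var("v0"),)))))
assert is_closed(Func("f", (Const("c0"), Const("c1"))))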

Definition 12.5 (Formula). The set of formulas Frm(L) of the language L is


defined inductively as follows:

1. ⊥ is an atomic formula.

2. If R is an n-place predicate symbol of L and t1 , . . . , tn are terms of L,


then R(t1 , . . . , tn ) is an atomic formula.

3. If t1 and t2 are terms of L, then =(t1 , t2 ) is an atomic formula.

4. If ϕ is a formula, then ¬ ϕ is a formula.

5. If ϕ and ψ are formulas, then ( ϕ ∧ ψ) is a formula.

6. If ϕ and ψ are formulas, then ( ϕ ∨ ψ) is a formula.


7. If ϕ and ψ are formulas, then ( ϕ → ψ) is a formula.

8. If ϕ is a formula and x is a variable, then ∀ x ϕ is a formula.

9. If ϕ is a formula and x is a variable, then ∃ x ϕ is a formula.

10. Nothing else is a formula.

The definitions of the set of terms and that of formulas are inductive defini-
tions. Essentially, we construct the set of formulas in infinitely many stages. In
the initial stage, we pronounce all atomic formulas to be formulas; this corre-
sponds to the first few cases of the definition, i.e., the cases for ⊥, R(t1 , . . . , tn )
and =(t1 , t2 ). “Atomic formula” thus means any formula of this form.
The other cases of the definition give rules for constructing new formulas
out of formulas already constructed. At the second stage, we can use them to
construct formulas out of atomic formulas. At the third stage, we construct
new formulas from the atomic formulas and those obtained in the second
stage, and so on. A formula is anything that is eventually constructed at such
a stage, and nothing else.
By convention, we write = between its arguments and leave out the paren-
theses: t1 = t2 is an abbreviation for =(t1 , t2 ). Moreover, ¬=(t1 , t2 ) is abbre-
viated as t1 6= t2 . When writing a formula (ψ ∗ χ) constructed from ψ, χ
using a two-place connective ∗, we will often leave out the outermost pair of
parentheses and write simply ψ ∗ χ.
Some logic texts require that the variable x must occur in ϕ in order for
∃ x ϕ and ∀ x ϕ to count as formulas. Nothing bad happens if you don’t require
this, and it makes things easier.
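
Like terms, formulas can be represented by a recursive datatype that mirrors the formation rules, one constructor per clause of the definition. As with the term sketch above, the names and representation are our own illustrative choices.

from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Falsum:               # the atomic formula ⊥
    pass

@dataclass(frozen=True)
class Atom:                 # R(t1, . . . , tn), including =(t1, t2)
    pred: str
    args: tuple

@dataclass(frozen=True)
class Neg:                  # ¬ϕ
    sub: Any

@dataclass(frozen=True)
class Bin:                  # (ϕ ∧ ψ), (ϕ ∨ ψ), (ϕ → ψ)
    op: str                 # "and", "or", or "->"
    left: Any
    right: Any

@dataclass(frozen=True)
class Quant:                # ∀x ϕ and ∃x ϕ
    q: str                  # "forall" or "exists"
    var: str
    sub: Any

# ∀v0 (A(v0) → ∃v1 B(v0, v1)):
phi = Quant("forall", "v0",
            Bin("->", Atom("A", ("v0",)),
                Quant("exists", "v1", Atom("B", ("v0", "v1")))))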

Definition 12.6. Formulas constructed using the defined operators are to be


understood as follows:

1. > abbreviates ¬⊥.

2. ϕ ↔ ψ abbreviates ( ϕ → ψ) ∧ (ψ → ϕ).

If we work in a language for a specific application, we will often write two-


place predicate symbols and function symbols between the respective terms,
e.g., t1 < t2 and (t1 + t2 ) in the language of arithmetic and t1 ∈ t2 in the
language of set theory. The successor function in the language of arithmetic
is even written conventionally after its argument: t′ . Officially, however, these
are just conventional abbreviations for A20 (t1 , t2 ), f02 (t1 , t2 ), A20 (t1 , t2 ) and f01 (t),
respectively.

Definition 12.7 (Syntactic identity). The symbol ≡ expresses syntactic identity


between strings of symbols, i.e., ϕ ≡ ψ iff ϕ and ψ are strings of symbols of
the same length and which contain the same symbol in each place.


The ≡ symbol may be flanked by strings obtained by concatenation, e.g.,


ϕ ≡ (ψ ∨ χ) means: the string of symbols ϕ is the same string as the one
obtained by concatenating an opening parenthesis, the string ψ, the ∨ symbol,
the string χ, and a closing parenthesis, in this order. If this is the case, then we
know that the first symbol of ϕ is an opening parenthesis, ϕ contains ψ as a
substring (starting at the second symbol), that substring is followed by ∨, etc.

12.4 Unique Readability


The way we defined formulas guarantees that every formula has a unique read-
ing, i.e., there is essentially only one way of constructing it according to our
formation rules for formulas and only one way of “interpreting” it. If this were
not so, we would have ambiguous formulas, i.e., formulas that have more
than one reading or interpretation—and that is clearly something we want to
avoid. But more importantly, without this property, most of the definitions
and proofs we are going to give will not go through.
Perhaps the best way to make this clear is to see what would happen if we
had given bad rules for forming formulas that would not guarantee unique
readability. For instance, we could have forgotten the parentheses in the for-
mation rules for connectives, e.g., we might have allowed this:

If ϕ and ψ are formulas, then so is ϕ → ψ.

Starting from an atomic formula θ, this would allow us to form θ → θ. From


this, together with θ, we would get θ → θ → θ. But there are two ways to do
this:

1. We take θ to be ϕ and θ → θ to be ψ.

2. We take ϕ to be θ → θ and ψ to be θ.

Correspondingly, there are two ways to “read” the formula θ → θ → θ. It is of


the form ψ → χ where ψ is θ and χ is θ → θ, but it is also of the form ψ → χ
with ψ being θ → θ and χ being θ.
If this happens, our definitions will not always work. For instance, when
we define the main operator of a formula, we say: in a formula of the form
ψ → χ, the main operator is the indicated occurrence of →. But if we can match
the formula θ → θ → θ with ψ → χ in the two different ways mentioned above,
then in one case we get the first occurrence of → as the main operator, and in
the second case the second occurrence. But we intend the main operator to
be a function of the formula, i.e., every formula must have exactly one main
operator occurrence.

Lemma 12.8. The number of left and right parentheses in a formula ϕ are equal.


Proof. We prove this by induction on the way ϕ is constructed. This requires


two things: (a) We have to prove first that all atomic formulas have the prop-
erty in question (the induction basis). (b) Then we have to prove that when
we construct new formulas out of given formulas, the new formulas have the
property provided the old ones do.
Let l ( ϕ) be the number of left parentheses, and r ( ϕ) the number of right
parentheses in ϕ, and l (t) and r (t) similarly the number of left and right
parentheses in a term t. We leave the proof that for any term t, l (t) = r (t)
as an exercise.

1. ϕ ≡ ⊥: ϕ has 0 left and 0 right parentheses.

2. ϕ ≡ R(t1 , . . . , tn ): l ( ϕ) = 1 + l (t1 ) + · · · + l (tn ) = 1 + r (t1 ) + · · · +


r (tn ) = r ( ϕ). Here we make use of the fact, left as an exercise, that
l (t) = r (t) for any term t.

3. ϕ ≡ t1 = t2 : l ( ϕ) = l (t1 ) + l (t2 ) = r (t1 ) + r (t2 ) = r ( ϕ).

4. ϕ ≡ ¬ψ: By induction hypothesis, l (ψ) = r (ψ). Thus l ( ϕ) = l (ψ) =


r ( ψ ) = r ( ϕ ).

5. ϕ ≡ (ψ ∗ χ): By induction hypothesis, l (ψ) = r (ψ) and l (χ) = r (χ).


Thus l ( ϕ) = 1 + l (ψ) + l (χ) = 1 + r (ψ) + r (χ) = r ( ϕ).

6. ϕ ≡ ∀ x ψ: By induction hypothesis, l (ψ) = r (ψ). Thus, l ( ϕ) = l (ψ) =


r ( ψ ) = r ( ϕ ).

7. ϕ ≡ ∃ x ψ: Similarly.
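Lemma 12.8 can also be checked concretely: if we render a formula (in the tuple encoding of the earlier sketch, with the terms restricted to plain strings for simplicity) into its official string form, the left and right parenthesis counts come out equal. A sketch:

    def render(phi):
        tag = phi[0]
        if tag == 'bot':
            return '⊥'
        if tag == 'atom':                        # terms are plain strings here
            return phi[1] + '(' + ','.join(phi[2]) + ')'
        if tag == 'eq':
            return '=(' + phi[1] + ',' + phi[2] + ')'
        if tag == 'not':
            return '¬' + render(phi[1])
        if tag in ('and', 'or', 'imp'):
            op = {'and': '∧', 'or': '∨', 'imp': '→'}[tag]
            return '(' + render(phi[1]) + ' ' + op + ' ' + render(phi[2]) + ')'
        return ('∀' if tag == 'all' else '∃') + phi[1] + ' ' + render(phi[2])

    s = render(('all', 'x', ('imp', ('atom', 'R', ('x',)),
                             ('not', ('eq', 'x', 'y')))))
    assert s.count('(') == s.count(')')          # Lemma 12.8, for this formula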

Definition 12.9 (Proper prefix). A string of symbols ψ is a proper prefix of a


string of symbols ϕ if concatenating ψ and a non-empty string of symbols
yields ϕ.

Lemma 12.10. If ϕ is a formula, and ψ is a proper prefix of ϕ, then ψ is not a formula.

Proof. Exercise.

Proposition 12.11. If ϕ is an atomic formula, then it satisfies one, and only one, of the following conditions.

1. ϕ ≡ ⊥.

2. ϕ ≡ R(t1 , . . . , tn ) where R is an n-place predicate symbol, t1 , . . . , tn are terms,


and each of R, t1 , . . . , tn is uniquely determined.

3. ϕ ≡ t1 = t2 where t1 and t2 are uniquely determined terms.


Proof. Exercise.

Proposition 12.12 (Unique Readability). Every formula satisfies one, and only one
of the following conditions.

1. ϕ is atomic.

2. ϕ is of the form ¬ψ.

3. ϕ is of the form (ψ ∧ χ).

4. ϕ is of the form (ψ ∨ χ).

5. ϕ is of the form (ψ → χ).

6. ϕ is of the form ∀ x ψ.

7. ϕ is of the form ∃ x ψ.

Moreover, in each case ψ, or ψ and χ, are uniquely determined. This means that, e.g., there are no different pairs ψ, χ and ψ′, χ′ so that ϕ is both of the form (ψ → χ) and (ψ′ → χ′).

Proof. The formation rules require that if a formula is not atomic, it must start with an opening parenthesis (, with ¬, or with a quantifier. On the other hand, every formula that starts with one of the following symbols must be atomic: a predicate symbol, a function symbol, a constant symbol, ⊥.
So we really only have to show that if ϕ is of the form (ψ ∗ χ) and also of the form (ψ′ ∗′ χ′), then ψ ≡ ψ′, χ ≡ χ′, and ∗ = ∗′.
So suppose both ϕ ≡ (ψ ∗ χ) and ϕ ≡ (ψ′ ∗′ χ′). Then either ψ ≡ ψ′ or not. If it is, clearly ∗ = ∗′ and χ ≡ χ′, since they then are substrings of ϕ that begin in the same place and are of the same length. The other case is ψ ≢ ψ′. Since ψ and ψ′ are both substrings of ϕ that begin at the same place, one must be a proper prefix of the other. But this is impossible by Lemma 12.10.

12.5 Main operator of a Formula


It is often useful to talk about the last operator used in constructing a for-
mula ϕ. This operator is called the main operator of ϕ. Intuitively, it is the
“outermost” operator of ϕ. For example, the main operator of ¬ ϕ is ¬, the
main operator of ( ϕ ∨ ψ) is ∨, etc.

Definition 12.13 (Main operator). The main operator of a formula ϕ is defined


as follows:

1. ϕ is atomic: ϕ has no main operator.

2. ϕ ≡ ¬ψ: the main operator of ϕ is ¬.


3. ϕ ≡ (ψ ∧ χ): the main operator of ϕ is ∧.


4. ϕ ≡ (ψ ∨ χ): the main operator of ϕ is ∨.
5. ϕ ≡ (ψ → χ): the main operator of ϕ is →.
6. ϕ ≡ ∀ x ψ: the main operator of ϕ is ∀.
7. ϕ ≡ ∃ x ψ: the main operator of ϕ is ∃.
In each case, we intend the specific indicated occurrence of the main oper-
ator in the formula. For instance, since the formula ((θ → α) → (α → θ )) is of
the form (ψ → χ) where ψ is (θ → α) and χ is (α → θ ), the second occurrence
of → is the main operator.
This is a recursive definition of a function which maps all non-atomic formulas to their main operator occurrence. Because of the way formulas are defined inductively, every formula ϕ satisfies one of the cases in Proposition 12.12. This guarantees that for each non-atomic formula ϕ a main operator exists. Because each formula satisfies only one of these conditions, and because the smaller formulas from which ϕ is constructed are uniquely determined in each case, the main operator occurrence of ϕ is unique, and so we have defined a function.
We call formulas by the following names depending on which symbol their
main operator is:
Main operator Type of formula Example
none atomic (formula) ⊥, R ( t1 , . . . , t n ), t1 = t2
¬ negation ¬ϕ
∧ conjunction ( ϕ ∧ ψ)
∨ disjunction ( ϕ ∨ ψ)
→ conditional ( ϕ → ψ)
∀ universal (formula) ∀x ϕ
∃ existential (formula) ∃x ϕ
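On the tuple encoding of the earlier sketches, the main operator is simply read off the outermost tag. A sketch:

    def main_operator(phi):
        tag = phi[0]
        if tag in ('bot', 'atom', 'eq'):         # atomic: no main operator
            return None
        return {'not': '¬', 'and': '∧', 'or': '∨',
                'imp': '→', 'all': '∀', 'ex': '∃'}[tag]

    assert main_operator(('imp', ('atom', 'R', ('x',)), ('bot',))) == '→'
    assert main_operator(('eq', 'x', 'y')) is None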

12.6 Subformulas
It is often useful to talk about the formulas that “make up” a given formula.
We call these its subformulas. Any formula counts as a subformula of itself; a
subformula of ϕ other than ϕ itself is a proper subformula.
Definition 12.14 (Immediate Subformula). If ϕ is a formula, the immediate sub-
formulas of ϕ are defined inductively as follows:
1. Atomic formulas have no immediate subformulas.
2. ϕ ≡ ¬ψ: The only immediate subformula of ϕ is ψ.
3. ϕ ≡ (ψ ∗ χ): The immediate subformulas of ϕ are ψ and χ (∗ is any one
of the two-place connectives).


4. ϕ ≡ ∀ x ψ: The only immediate subformula of ϕ is ψ.

5. ϕ ≡ ∃ x ψ: The only immediate subformula of ϕ is ψ.

Definition 12.15 (Proper Subformula). If ϕ is a formula, the proper subformulas of ϕ are defined recursively as follows:

1. Atomic formulas have no proper subformulas.

2. ϕ ≡ ¬ψ: The proper subformulas of ϕ are ψ together with all proper subformulas of ψ.

3. ϕ ≡ (ψ ∗ χ): The proper subformulas of ϕ are ψ, χ, together with all proper subformulas of ψ and those of χ.

4. ϕ ≡ ∀ x ψ: The proper subformulas of ϕ are ψ together with all proper subformulas of ψ.

5. ϕ ≡ ∃ x ψ: The proper subformulas of ϕ are ψ together with all proper subformulas of ψ.

Definition 12.16 (Subformula). The subformulas of ϕ are ϕ itself together with all its proper subformulas.

Note the subtle difference in how we have defined immediate subformulas


and proper subformulas. In the first case, we have directly defined the imme-
diate subformulas of a formula ϕ for each possible form of ϕ. It is an explicit
definition by cases, and the cases mirror the inductive definition of the set of
formulas. In the second case, we have also mirrored the way the set of all
formulas is defined, but in each case we have also included the proper subfor-
mulas of the smaller formulas ψ, χ in addition to these formulas themselves.
This makes the definition recursive. In general, a definition of a function on an
inductively defined set (in our case, formulas) is recursive if the cases in the
definition of the function make use of the function itself. To be well defined,
we must make sure, however, that we only ever use the values of the function
for arguments that come “before” the one we are defining—in our case, when
defining “proper subformula” for (ψ ∗ χ) we only use the proper subformulas
of the “earlier” formulas ψ and χ.
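The recursion in Definition 12.15 translates directly into a recursive program on the tuple encoding; note how the function calls itself exactly on the "earlier" formulas, as the definition does. A sketch:

    def proper_subformulas(phi):
        tag = phi[0]
        if tag in ('bot', 'atom', 'eq'):
            return []                            # atomic: no proper subformulas
        if tag == 'not':
            return [phi[1]] + proper_subformulas(phi[1])
        if tag in ('and', 'or', 'imp'):
            return ([phi[1], phi[2]]
                    + proper_subformulas(phi[1]) + proper_subformulas(phi[2]))
        if tag in ('all', 'ex'):
            return [phi[2]] + proper_subformulas(phi[2])

    def subformulas(phi):
        return [phi] + proper_subformulas(phi)   # Definition 12.16

    phi = ('and', ('not', ('bot',)), ('atom', 'R', ('x',)))
    assert proper_subformulas(phi) == [('not', ('bot',)), ('atom', 'R', ('x',)),
                                       ('bot',)]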

12.7 Free Variables and Sentences


Definition 12.17 (Free occurrences of a variable). The free occurrences of a
variable in a formula are defined inductively as follows:

1. ϕ is atomic: all variable occurrences in ϕ are free.

2. ϕ ≡ ¬ψ: the free variable occurrences of ϕ are exactly those of ψ.


3. ϕ ≡ (ψ ∗ χ): the free variable occurrences of ϕ are those in ψ together


with those in χ.

4. ϕ ≡ ∀ x ψ: the free variable occurrences in ϕ are all of those in ψ except


for occurrences of x.

5. ϕ ≡ ∃ x ψ: the free variable occurrences in ϕ are all of those in ψ except


for occurrences of x.

Definition 12.18 (Bound Variables). An occurrence of a variable in a formula ϕ


is bound if it is not free.

Definition 12.19 (Scope). If ∀ x ψ is an occurrence of a subformula in a for-


mula ϕ, then the corresponding occurrence of ψ in ϕ is called the scope of the
corresponding occurrence of ∀ x. Similarly for ∃ x.
If ψ is the scope of a quantifier occurrence ∀ x or ∃ x in ϕ, then all occur-
rences of x which are free in ψ are said to be bound by the mentioned quantifier
occurrence.

Example 12.20. Consider the following formula:

∃v0 A20 (v0 , v1 )

The scope of ∃v0 is the subformula ψ ≡ A20 (v0 , v1 ). The quantifier binds the occurrence of v0 in ψ, but does not bind the occurrence of v1 . So v1 is a free variable in this case.
We can now see how this might work in a more complicated formula ϕ:

∀v0 (A10 (v0 ) → A20 (v0 , v1 )) → ∃v1 (A21 (v0 , v1 ) ∨ ∀v0 ¬A11 (v0 ))

Here ψ ≡ (A10 (v0 ) → A20 (v0 , v1 )) is the scope of the first ∀v0 , χ ≡ (A21 (v0 , v1 ) ∨ ∀v0 ¬A11 (v0 )) is the scope of ∃v1 , and θ ≡ ¬A11 (v0 ) is the scope of the second ∀v0 . The first ∀v0 binds the occurrences of v0 in ψ, ∃v1 binds the occurrence of v1 in χ, and the second ∀v0 binds the occurrence of v0 in θ. The first occurrence of v1 and the fourth occurrence of v0 are free in ϕ. The last occurrence of v0 is free in θ, but bound in χ and ϕ.

Definition 12.21 (Sentence). A formula ϕ is a sentence iff it contains no free


occurrences of variables.
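Computing the variables that occur free in a formula is another recursion on the inductive definition of formulas. A sketch, on the tuple encoding of the earlier sketches, with the simplifying assumption that the variables are exactly the strings v0, v1, . . . ; the function returns the set of variables with free occurrences, not the occurrences themselves:

    def free_vars_term(t):
        if isinstance(t, str):          # convention: variables are v0, v1, ...
            return {t} if t.startswith('v') else set()
        return set().union(*(free_vars_term(u) for u in t[2]))

    def free_vars(phi):
        tag = phi[0]
        if tag == 'bot':
            return set()
        if tag == 'atom':
            return set().union(*(free_vars_term(u) for u in phi[2]))
        if tag == 'eq':
            return free_vars_term(phi[1]) | free_vars_term(phi[2])
        if tag == 'not':
            return free_vars(phi[1])
        if tag in ('and', 'or', 'imp'):
            return free_vars(phi[1]) | free_vars(phi[2])
        if tag in ('all', 'ex'):        # occurrences of the bound variable drop out
            return free_vars(phi[2]) - {phi[1]}

    def is_sentence(phi):
        return not free_vars(phi)

    # Example 12.20: in ∃v0 A(v0, v1), only v1 is free
    assert free_vars(('ex', 'v0', ('atom', 'A', ('v0', 'v1')))) == {'v1'}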

12.8 Substitution
Definition 12.22 (Substitution in a term). We define s[t/x ], the result of sub-
stituting t for every occurrence of x in s, recursively:

1. s ≡ c: s[t/x ] is just s.


2. s ≡ y: s[t/x ] is also just s, provided y is a variable and y ≢ x.

3. s ≡ x: s[t/x ] is t.

4. s ≡ f (t1 , . . . , tn ): s[t/x ] is f (t1 [t/x ], . . . , tn [t/x ]).

Definition 12.23. A term t is free for x in ϕ if none of the free occurrences of x


in ϕ occur in the scope of a quantifier that binds a variable in t.

Example 12.24.

1. v8 is free for v1 in ∃v3 A24 (v3 , v1 )

2. f12 (v1 , v2 ) is not free for v0 in ∀v2 A24 (v0 , v2 )

Definition 12.25 (Substitution in a formula). If ϕ is a formula, x is a variable,


and t is a term free for x in ϕ, then ϕ[t/x ] is the result of substituting t for all
free occurrences of x in ϕ.

1. ϕ ≡ ⊥: ϕ[t/x ] is ⊥.

2. ϕ ≡ P(t1 , . . . , tn ): ϕ[t/x ] is P(t1 [t/x ], . . . , tn [t/x ]).

3. ϕ ≡ t1 = t2 : ϕ[t/x ] is t1 [t/x ] = t2 [t/x ].

4. ϕ ≡ ¬ψ: ϕ[t/x ] is ¬ψ[t/x ].

5. ϕ ≡ (ψ ∧ χ): ϕ[t/x ] is (ψ[t/x ] ∧ χ[t/x ]).

6. ϕ ≡ (ψ ∨ χ): ϕ[t/x ] is (ψ[t/x ] ∨ χ[t/x ]).

7. ϕ ≡ (ψ → χ): ϕ[t/x ] is (ψ[t/x ] → χ[t/x ]).

8. ϕ ≡ ∀y ψ: ϕ[t/x ] is ∀y ψ[t/x ], provided y is a variable other than x;


otherwise ϕ[t/x ] is just ϕ.

9. ϕ ≡ ∃y ψ: ϕ[t/x ] is ∃y ψ[t/x ], provided y is a variable other than x;


otherwise ϕ[t/x ] is just ϕ.

Note that substitution may be vacuous: If x does not occur in ϕ at all, then
ϕ[t/x ] is just ϕ.
The restriction that t must be free for x in ϕ is necessary to exclude cases like the following. If ϕ ≡ ∃y x < y and t ≡ y, then ϕ[t/x ] would be ∃y y < y. In this case the free variable y is "captured" by the quantifier ∃y upon substitution, and that is undesirable. For instance, we would like it to be the case that whenever ∀ x ψ holds, so does ψ[t/x ]. But consider ∀ x ∃y x < y (here ψ is ∃y x < y). It is a sentence that is true about, e.g., the natural numbers: for every number x there is a number y greater than it. If we allowed y as a possible substitution for x, we would end up with ψ[y/x ] ≡ ∃y y < y, which is false. We prevent this by requiring that none of the free variables in t would end up being bound by a quantifier in ϕ.
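Substitution, too, follows the recursive clauses of Definitions 12.22 and 12.25 directly. A sketch on the tuple encoding (it assumes, but does not check, that t is free for x in ϕ):

    def subst_term(u, t, x):
        if isinstance(u, str):
            return t if u == x else u
        return ('fun', u[1], tuple(subst_term(v, t, x) for v in u[2]))

    def subst(phi, t, x):
        tag = phi[0]
        if tag == 'bot':
            return phi
        if tag == 'atom':
            return ('atom', phi[1], tuple(subst_term(u, t, x) for u in phi[2]))
        if tag == 'eq':
            return ('eq', subst_term(phi[1], t, x), subst_term(phi[2], t, x))
        if tag == 'not':
            return ('not', subst(phi[1], t, x))
        if tag in ('and', 'or', 'imp'):
            return (tag, subst(phi[1], t, x), subst(phi[2], t, x))
        if tag in ('all', 'ex'):
            if phi[1] == x:              # x is bound here: substitution stops,
                return phi               # as in clauses (8) and (9)
            return (tag, phi[1], subst(phi[2], t, x))

    phi = ('ex', 'y', ('atom', '<', ('x', 'y')))        # ∃y x < y
    assert subst(phi, 'z', 'x') == ('ex', 'y', ('atom', '<', ('z', 'y')))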
We often use the following convention to avoid cumbersome notation: If ϕ is a formula with a free variable x, we write ϕ( x ) to indicate this. When it is clear which ϕ and x we have in mind, and t is a term (assumed to be free for x in ϕ( x )), then we write ϕ(t) as short for ϕ( x )[t/x ].

12.9 Structures for First-order Languages


First-order languages are, by themselves, uninterpreted: the constant symbols,
function symbols, and predicate symbols have no specific meaning attached
to them. Meanings are given by specifying a structure. It specifies the domain,
i.e., the objects which the constant symbols pick out, the function symbols
operate on, and the quantifiers range over. In addition, it specifies which con-
stant symbols pick out which objects, how a function symbol maps objects
to objects, and which objects the predicate symbols apply to. Structures are
the basis for semantic notions in logic, e.g., the notion of consequence, valid-
ity, satisfiablity. They are variously called “structures,” “interpretations,” or
“models” in the literature.

Definition 12.26 (Structures). A structure M for a language L of first-order logic consists of the following elements:

1. Domain: a non-empty set, |M|

2. Interpretation of constant symbols: for each constant symbol c of L, an element c^M ∈ |M|

3. Interpretation of predicate symbols: for each n-place predicate symbol R of L (other than =), an n-place relation R^M ⊆ |M|^n

4. Interpretation of function symbols: for each n-place function symbol f of L, an n-place function f^M : |M|^n → |M|

Example 12.27. A structure M for the language of arithmetic consists of a set |M|, an element 0^M of |M| as interpretation of the constant symbol 0, a one-place function ′^M : |M| → |M|, two two-place functions +^M and ×^M, both |M|^2 → |M|, and a two-place relation <^M ⊆ |M|^2.
An obvious example of such a structure is the following:

1. |N| = N

2. 0^N = 0

3. ′^N(n) = n + 1 for all n ∈ N

4. +^N(n, m) = n + m for all n, m ∈ N

5. ×^N(n, m) = n · m for all n, m ∈ N

6. <^N = {⟨n, m⟩ : n ∈ N, m ∈ N, n < m}

The structure N for LA so defined is called the standard model of arithmetic, because it interprets the non-logical constants of LA exactly how you would expect.
However, there are many other possible structures for LA. For instance, we might take as the domain the set Z of integers instead of N, and define the interpretations of 0, ′, +, ×, < accordingly. But we can also define structures for LA which have nothing even remotely to do with numbers.
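A structure can be represented in a program as a record of interpretations. Here is a sketch of the standard model N (the dictionary encoding is our own; since the domain and the relation <^N are infinite, we represent the relation by its characteristic function instead of listing its extension):

    N = {
        'consts': {'0': 0},                      # interpretation of the constant 0
        'funcs':  {'succ': lambda n: n + 1,      # interpretation of ′
                   '+':    lambda n, m: n + m,
                   '*':    lambda n, m: n * m},
        'preds':  {'<':    lambda n, m: n < m},  # characteristic function of <^N
    }

    assert N['funcs']['succ'](N['consts']['0']) == 1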

Example 12.28. A structure M for the language LZ of set theory requires just a set and a single two-place relation. So technically, e.g., the set of people plus the relation "x is older than y" could be used as a structure for LZ, as well as N together with n ≥ m for n, m ∈ N.
A particularly interesting structure for LZ in which the elements of the domain are actually sets, and the interpretation of ∈ actually is the relation "x is an element of y," is the structure HF of hereditarily finite sets:

1. |HF| = ∅ ∪ ℘(∅) ∪ ℘(℘(∅)) ∪ ℘(℘(℘(∅))) ∪ . . . ;

2. ∈^HF = {⟨x, y⟩ : x, y ∈ |HF|, x ∈ y}.

The stipulations we make as to what counts as a structure impact our logic.


For example, the choice to prevent empty domains ensures, given the usual
account of satisfaction (or truth) for quantified sentences, that ∃ x ( ϕ( x ) ∨ ¬ ϕ( x ))
is valid—that is, a logical truth. And the stipulation that all constant symbols
must refer to an object in the domain ensures that the existential generaliza-
tion is a sound pattern of inference: ϕ( a), therefore ∃ x ϕ( x ). If we allowed
names to refer outside the domain, or to not refer, then we would be on our
way to a free logic, in which existential generalization requires an additional
premise: ϕ( a) and ∃ x x = a, therefore ∃ x ϕ( x ).

12.10 Covered Structures for First-order Languages


Recall that a term is closed if it contains no variables.

Definition 12.29 (Value of closed terms). If t is a closed term of the language L and M is a structure for L, the value Val^M(t) is defined as follows:

1. If t is just the constant symbol c, then Val^M(c) = c^M.

2. If t is of the form f(t1 , . . . , tn ), then

Val^M(t) = f^M(Val^M(t1 ), . . . , Val^M(tn )).


Definition 12.30 (Covered structure). A structure is covered if every element


of the domain is the value of some closed term.

Example 12.31. Let L be the language with constant symbols zero, one, two, . . . , the binary predicate symbol <, and the binary function symbols + and ×. Then a structure M for L is the one with domain |M| = {0, 1, 2, . . .} and assignments zero^M = 0, one^M = 1, two^M = 2, and so forth. For the binary relation symbol <, the set <^M is the set of all pairs ⟨c1, c2⟩ ∈ |M|^2 such that c1 is less than c2: for example, ⟨1, 3⟩ ∈ <^M but ⟨2, 2⟩ ∉ <^M. For the binary function symbol +, define +^M in the usual way—for example, +^M(2, 3) maps to 5, and similarly for the binary function symbol ×. Hence, the value of four is just 4, and the value of ×(two, +(three, zero)) (or in infix notation, two × (three + zero)) is

Val^M(×(two, +(three, zero)))
= ×^M(Val^M(two), Val^M(+(three, zero)))
= ×^M(Val^M(two), +^M(Val^M(three), Val^M(zero)))
= ×^M(two^M, +^M(three^M, zero^M))
= ×^M(2, +^M(3, 0))
= ×^M(2, 3)
= 6
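The computation above is exactly the recursion a program would perform. A sketch of Definition 12.29 (our encoding: a finite fragment of the structure as a dictionary, closed terms as constant names or tuples ('fun', f, args)):

    def val(M, t):
        if isinstance(t, str):                   # a constant symbol
            return M['consts'][t]
        f = M['funcs'][t[1]]                     # clause (2) of Definition 12.29
        return f(*(val(M, u) for u in t[2]))

    M = {'consts': {'zero': 0, 'two': 2, 'three': 3},
         'funcs':  {'+': lambda x, y: x + y, '*': lambda x, y: x * y}}

    t = ('fun', '*', ('two', ('fun', '+', ('three', 'zero'))))
    assert val(M, t) == 6                        # as computed in Example 12.31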

12.11 Satisfaction of a Formula in a Structure


The basic notions that relate expressions such as terms and formulas, on the one hand, and structures on the other, are those of the value of a term and the satisfaction of a formula. Informally, the value of a term is an element of a structure—
if the term is just a constant, its value is the object assigned to the constant
by the structure, and if it is built up using function symbols, the value is com-
puted from the values of constants and the functions assigned to the functions
in the term. A formula is satisfied in a structure if the interpretation given to
the predicates makes the formula true in the domain of the structure. This
notion of satisfaction is specified inductively: the specification of the struc-
ture directly states when atomic formulas are satisfied, and we define when a
complex formula is satisfied depending on the main connective or quantifier
and whether or not the immediate subformulas are satisfied. The case of the
quantifiers here is a bit tricky, as the immediate subformula of a quantified for-
mula has a free variable, and structures don’t specify the values of variables.
In order to deal with this difficulty, we also introduce variable assignments and
define satisfaction not with respect to a structure alone, but with respect to a
structure plus a variable assignment.


Definition 12.32 (Variable Assignment). A variable assignment s for a struc-


ture M is a function which maps each variable to an element of |M|, i.e.,
s : Var → |M|.

A structure assigns a value to each constant symbol, and a variable assign-


ment to each variable. But we want to use terms built up from them to also
name elements of the domain. For this we define the value of terms induc-
tively. For constant symbols and variables the value is just as the structure or
the variable assignment specifies it; for more complex terms it is computed
recursively using the functions the structure assigns to the function symbols.

Definition 12.33 (Value of Terms). If t is a term of the language L, M is a structure for L, and s is a variable assignment for M, the value Val^M_s(t) is defined as follows:

1. t ≡ c: Val^M_s(t) = c^M.

2. t ≡ x: Val^M_s(t) = s(x).

3. t ≡ f(t1 , . . . , tn ): Val^M_s(t) = f^M(Val^M_s(t1 ), . . . , Val^M_s(tn )).

Definition 12.34 (x-Variant). If s is a variable assignment for a structure M, then any variable assignment s′ for M which differs from s at most in what it assigns to x is called an x-variant of s. If s′ is an x-variant of s we write s ∼x s′.

Note that an x-variant of an assignment s does not have to assign something different to x. In fact, every assignment counts as an x-variant of itself.

Definition 12.35 (Satisfaction). Satisfaction of a formula ϕ in a structure M relative to a variable assignment s, in symbols: M, s ⊨ ϕ, is defined recursively as follows. (We write M, s ⊭ ϕ to mean "not M, s ⊨ ϕ.")

1. ϕ ≡ ⊥: M, s ⊭ ϕ.

2. ϕ ≡ R(t1 , . . . , tn ): M, s ⊨ ϕ iff ⟨Val^M_s(t1 ), . . . , Val^M_s(tn )⟩ ∈ R^M.

3. ϕ ≡ t1 = t2 : M, s ⊨ ϕ iff Val^M_s(t1 ) = Val^M_s(t2 ).

4. ϕ ≡ ¬ψ: M, s ⊨ ϕ iff M, s ⊭ ψ.

5. ϕ ≡ (ψ ∧ χ): M, s ⊨ ϕ iff M, s ⊨ ψ and M, s ⊨ χ.

6. ϕ ≡ (ψ ∨ χ): M, s ⊨ ϕ iff M, s ⊨ ψ or M, s ⊨ χ (or both).

7. ϕ ≡ (ψ → χ): M, s ⊨ ϕ iff M, s ⊭ ψ or M, s ⊨ χ (or both).

8. ϕ ≡ ∀ x ψ: M, s ⊨ ϕ iff for every x-variant s′ of s, M, s′ ⊨ ψ.

9. ϕ ≡ ∃ x ψ: M, s ⊨ ϕ iff there is an x-variant s′ of s so that M, s′ ⊨ ψ.
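For structures with a finite domain, this definition can be implemented directly: variable assignments become dictionaries, and the quantifier clauses loop over all x-variants. A sketch (on our tuple encoding, with predicate symbols interpreted by sets of tuples; we assume variable names do not clash with constant symbols):

    def val(M, s, t):
        if isinstance(t, str):                   # variable or constant symbol
            return s[t] if t in s else M['consts'][t]
        return M['funcs'][t[1]](*(val(M, s, u) for u in t[2]))

    def sat(M, s, phi):
        tag = phi[0]
        if tag == 'bot':
            return False
        if tag == 'atom':
            return tuple(val(M, s, u) for u in phi[2]) in M['preds'][phi[1]]
        if tag == 'eq':
            return val(M, s, phi[1]) == val(M, s, phi[2])
        if tag == 'not':
            return not sat(M, s, phi[1])
        if tag == 'and':
            return sat(M, s, phi[1]) and sat(M, s, phi[2])
        if tag == 'or':
            return sat(M, s, phi[1]) or sat(M, s, phi[2])
        if tag == 'imp':
            return not sat(M, s, phi[1]) or sat(M, s, phi[2])
        if tag == 'all':                         # every x-variant satisfies psi
            return all(sat(M, {**s, phi[1]: a}, phi[2]) for a in M['domain'])
        if tag == 'ex':                          # some x-variant satisfies psi
            return any(sat(M, {**s, phi[1]: a}, phi[2]) for a in M['domain'])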

The variable assignments are important in the last two clauses. We cannot define satisfaction of ∀ x ψ( x ) by "for all a ∈ |M|, M ⊨ ψ( a)." We cannot define satisfaction of ∃ x ψ( x ) by "for at least one a ∈ |M|, M ⊨ ψ( a)." The reason is that a is not a symbol of the language, and so ψ( a) is not a formula (that is, ψ[ a/x ] is undefined). We also cannot assume that we have constant symbols or terms available that name every element of M, since there is nothing in the definition of structures that requires it. Even in the standard language the set of constant symbols is denumerable, so if |M| is not enumerable there aren't even enough constant symbols to name every object.

Example 12.36. Let L = {a, b, f, R} where a and b are constant symbols, f is a two-place function symbol, and R is a two-place predicate symbol. Consider the structure M defined by:

1. |M| = {1, 2, 3, 4}

2. a^M = 1

3. b^M = 2

4. f^M(x, y) = x + y if x + y ≤ 3 and = 3 otherwise.

5. R^M = {⟨1, 1⟩, ⟨1, 2⟩, ⟨2, 3⟩, ⟨2, 4⟩}

The function s( x ) = 1 that assigns 1 ∈ |M| to every variable is a variable


assignment for M.
Then

Val^M_s(f(a, b)) = f^M(Val^M_s(a), Val^M_s(b)).

Since a and b are constant symbols, Val^M_s(a) = a^M = 1 and Val^M_s(b) = b^M = 2. So

Val^M_s(f(a, b)) = f^M(1, 2) = 1 + 2 = 3.

To compute the value of f(f(a, b), a) we have to consider

Val^M_s(f(f(a, b), a)) = f^M(Val^M_s(f(a, b)), Val^M_s(a)) = f^M(3, 1) = 3,

since 3 + 1 > 3. Since s(x) = 1 and Val^M_s(x) = s(x), we also have

Val^M_s(f(f(a, b), x)) = f^M(Val^M_s(f(a, b)), Val^M_s(x)) = f^M(3, 1) = 3.

An atomic formula R(t1 , t2 ) is satisfied if the tuple of values of its arguments, i.e., ⟨Val^M_s(t1 ), Val^M_s(t2 )⟩, is an element of R^M. So, e.g., we have M, s ⊨ R(b, f(a, b)) since ⟨Val^M_s(b), Val^M_s(f(a, b))⟩ = ⟨2, 3⟩ ∈ R^M, but M, s ⊭ R(x, f(a, b)) since ⟨1, 3⟩ ∉ R^M.
To determine if a non-atomic formula ϕ is satisfied, you apply the clause in the inductive definition that applies to the main connective. For instance, the main connective in R(a, a) → (R(b, x) ∨ R(x, b)) is the →, and

M, s ⊨ R(a, a) → (R(b, x) ∨ R(x, b)) iff
M, s ⊭ R(a, a) or M, s ⊨ R(b, x) ∨ R(x, b)

Since M, s ⊨ R(a, a) (because ⟨1, 1⟩ ∈ R^M) we can't yet determine the answer and must first figure out if M, s ⊨ R(b, x) ∨ R(x, b):

M, s ⊨ R(b, x) ∨ R(x, b) iff
M, s ⊨ R(b, x) or M, s ⊨ R(x, b)

And this is the case, since M, s ⊨ R(x, b) (because ⟨1, 2⟩ ∈ R^M).

Recall that an x-variant of s is a variable assignment that differs from s at most in what it assigns to x. For every element of |M|, there is an x-variant of s: s1(x) = 1, s2(x) = 2, s3(x) = 3, s4(x) = 4, and with si(y) = s(y) = 1 for all variables y other than x, these are all the x-variants of s for the structure M. Note, in particular, that s1 = s is also an x-variant of s, i.e., s is an x-variant of itself.
To determine if an existentially quantified formula ∃ x ϕ( x ) is satisfied, we have to determine if M, s′ ⊨ ϕ( x ) for at least one x-variant s′ of s. So,

M, s ⊨ ∃ x (R(b, x) ∨ R(x, b)),

since M, s1 ⊨ R(b, x) ∨ R(x, b) (s3 would also fit the bill). But,

M, s ⊭ ∃ x (R(b, x) ∧ R(x, b))

since for none of the si, M, si ⊨ R(b, x) ∧ R(x, b).


To determine if a universally quantified formula ∀ x ϕ( x ) is satisfied, we have to determine if M, s′ ⊨ ϕ( x ) for all x-variants s′ of s. So,

M, s ⊨ ∀ x (R(x, a) → R(a, x)),

since M, si ⊨ R(x, a) → R(a, x) for all si (M, s1 ⊨ R(a, x), and M, sj ⊭ R(x, a) for j = 2, 3, and 4). But,

M, s ⊭ ∀ x (R(a, x) → R(x, a))

since M, s2 ⊭ R(a, x) → R(x, a) (because M, s2 ⊨ R(a, x) and M, s2 ⊭ R(x, a)).


For a more complicated case, consider

∀ x (R(a, x) → ∃y R(x, y)).

Since M, s3 ⊭ R(a, x) and M, s4 ⊭ R(a, x), the interesting cases where we have to worry about the consequent of the conditional are only s1 and s2. Does M, s1 ⊨ ∃y R(x, y) hold? It does if there is at least one y-variant s1′ of s1 so that M, s1′ ⊨ R(x, y). In fact, s1 is such a y-variant (s1(x) = 1, s1(y) = 1, and ⟨1, 1⟩ ∈ R^M), so the answer is yes. To determine if M, s2 ⊨ ∃y R(x, y) we have to look at the y-variants of s2. Here, s2 itself does not satisfy R(x, y) (s2(x) = 2, s2(y) = 1, and ⟨2, 1⟩ ∉ R^M). However, consider s2′ ∼y s2 with s2′(y) = 3. M, s2′ ⊨ R(x, y) since ⟨2, 3⟩ ∈ R^M, and so M, s2 ⊨ ∃y R(x, y). In sum, for every x-variant si of s, either M, si ⊭ R(a, x) (i = 3, 4) or M, si ⊨ ∃y R(x, y) (i = 1, 2), and so

M, s ⊨ ∀ x (R(a, x) → ∃y R(x, y)).

On the other hand,

M, s ⊭ ∃ x (R(a, x) ∧ ∀y R(x, y)).

The only x-variants si of s with M, si ⊨ R(a, x) are s1 and s2. But for each, there is in turn a y-variant si′ ∼y si so that M, si′ ⊭ R(x, y) (e.g., s1′(y) = 4 and s2′(y) = 1), and so M, si ⊭ ∀y R(x, y) for i = 1, 2. In sum, none of the x-variants si ∼x s are such that M, si ⊨ R(a, x) ∧ ∀y R(x, y).
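Some of the claims of this example can be spot-checked mechanically with the sat sketch above (M and s below are the structure and assignment of Example 12.36, in our encoding):

    M = {'domain': {1, 2, 3, 4},
         'consts': {'a': 1, 'b': 2},
         'funcs':  {'f': lambda x, y: x + y if x + y <= 3 else 3},
         'preds':  {'R': {(1, 1), (1, 2), (2, 3), (2, 4)}}}
    s = {'x': 1, 'y': 1}

    # ∃x (R(b, x) ∨ R(x, b)) is satisfied ...
    assert sat(M, s, ('ex', 'x', ('or', ('atom', 'R', ('b', 'x')),
                                  ('atom', 'R', ('x', 'b')))))
    # ... but ∃x (R(b, x) ∧ R(x, b)) is not
    assert not sat(M, s, ('ex', 'x', ('and', ('atom', 'R', ('b', 'x')),
                                      ('atom', 'R', ('x', 'b')))))
    # ∀x (R(a, x) → ∃y R(x, y)) is satisfied
    assert sat(M, s, ('all', 'x', ('imp', ('atom', 'R', ('a', 'x')),
                                   ('ex', 'y', ('atom', 'R', ('x', 'y'))))))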

12.12 Variable Assignments


A variable assignment s provides a value for every variable—and there are infinitely many of them. This is of course not necessary. We require variable assignments to assign values to all variables simply because it makes things a lot easier. The value of a term t, and whether or not a formula ϕ is satisfied in a structure with respect to s, only depend on the assignments s makes to the variables in t and the free variables of ϕ. This is the content of the next two propositions. To make the idea of "depends on" precise, we show that any two variable assignments that agree on all the variables in t give the same value, and that if two variable assignments agree on all the free variables of ϕ, then ϕ is satisfied relative to one iff it is satisfied relative to the other.

Proposition 12.37. If the variables in a term t are among x1 , . . . , xn , and s1(xi) = s2(xi) for i = 1, . . . , n, then Val^M_{s1}(t) = Val^M_{s2}(t).

Proof. By induction on the complexity of t. For the base case, t can be a constant symbol or one of the variables x1 , . . . , xn . If t = c, then Val^M_{s1}(t) = c^M = Val^M_{s2}(t). If t = xi, then s1(xi) = s2(xi) by the hypothesis of the proposition, and so Val^M_{s1}(t) = s1(xi) = s2(xi) = Val^M_{s2}(t).


For the inductive step, assume that t = f(t1 , . . . , tk ) and that the claim holds for t1 , . . . , tk . Then

Val^M_{s1}(t) = Val^M_{s1}(f(t1 , . . . , tk )) = f^M(Val^M_{s1}(t1 ), . . . , Val^M_{s1}(tk )).

For j = 1, . . . , k, the variables of tj are among x1 , . . . , xn . So by induction hypothesis, Val^M_{s1}(tj) = Val^M_{s2}(tj). So,

Val^M_{s1}(t) = f^M(Val^M_{s1}(t1 ), . . . , Val^M_{s1}(tk ))
= f^M(Val^M_{s2}(t1 ), . . . , Val^M_{s2}(tk ))
= Val^M_{s2}(f(t1 , . . . , tk )) = Val^M_{s2}(t).

Proposition 12.38. If the free variables in ϕ are among x1 , . . . , xn , and s1(xi) = s2(xi) for i = 1, . . . , n, then M, s1 ⊨ ϕ iff M, s2 ⊨ ϕ.

Proof. We use induction on the complexity of ϕ. For the base case, where ϕ is atomic, ϕ can be: ⊥, R(t1 , . . . , tk ) for a k-place predicate R and terms t1 , . . . , tk , or t1 = t2 for terms t1 and t2 .

1. ϕ ≡ ⊥: both M, s1 ⊭ ϕ and M, s2 ⊭ ϕ.

2. ϕ ≡ R(t1 , . . . , tk ): let M, s1 ⊨ ϕ. Then

⟨Val^M_{s1}(t1 ), . . . , Val^M_{s1}(tk )⟩ ∈ R^M.

For i = 1, . . . , k, Val^M_{s1}(ti) = Val^M_{s2}(ti) by Proposition 12.37. So we also have ⟨Val^M_{s2}(t1 ), . . . , Val^M_{s2}(tk )⟩ ∈ R^M.

3. ϕ ≡ t1 = t2 : suppose M, s1 ⊨ ϕ. Then Val^M_{s1}(t1 ) = Val^M_{s1}(t2 ). So,

Val^M_{s2}(t1 ) = Val^M_{s1}(t1 ) (by Proposition 12.37)
= Val^M_{s1}(t2 ) (since M, s1 ⊨ t1 = t2 )
= Val^M_{s2}(t2 ) (by Proposition 12.37),

so M, s2 ⊨ t1 = t2 .

Now assume M, s1 ⊨ ψ iff M, s2 ⊨ ψ for all formulas ψ less complex than ϕ. The induction step proceeds by cases determined by the main operator of ϕ. In each case, we only demonstrate the forward direction of the biconditional; the proof of the reverse direction is symmetrical. In all cases except those for the quantifiers, we apply the induction hypothesis to sub-formulas ψ of ϕ. The free variables of ψ are among those of ϕ. Thus, if s1 and s2 agree on the free variables of ϕ, they also agree on those of ψ, and the induction hypothesis applies to ψ.

1. ϕ ≡ ¬ψ: if M, s1 ⊨ ϕ, then M, s1 ⊭ ψ, so by the induction hypothesis, M, s2 ⊭ ψ, hence M, s2 ⊨ ϕ.

2. ϕ ≡ ψ ∧ χ: exercise.

3. ϕ ≡ ψ ∨ χ: if M, s1 ⊨ ϕ, then M, s1 ⊨ ψ or M, s1 ⊨ χ. By induction hypothesis, M, s2 ⊨ ψ or M, s2 ⊨ χ, so M, s2 ⊨ ϕ.

4. ϕ ≡ ψ → χ: exercise.

5. ϕ ≡ ∃ x ψ: if M, s1 ⊨ ϕ, there is an x-variant s1′ of s1 so that M, s1′ ⊨ ψ. Let s2′ be the x-variant of s2 that assigns the same thing to x as does s1′. The free variables of ψ are among x1 , . . . , xn , and x. s1′(xi) = s2′(xi), since s1′ and s2′ are x-variants of s1 and s2 , respectively, and by hypothesis s1(xi) = s2(xi). s1′(x) = s2′(x) by the way we have defined s2′. Then the induction hypothesis applies to ψ and s1′, s2′, so M, s2′ ⊨ ψ. Hence, there is an x-variant of s2 that satisfies ψ, and so M, s2 ⊨ ϕ.

6. ϕ ≡ ∀ x ψ: exercise.

By induction, we get that M, s1 ⊨ ϕ iff M, s2 ⊨ ϕ whenever the free variables in ϕ are among x1 , . . . , xn and s1(xi) = s2(xi) for i = 1, . . . , n.

Sentences have no free variables, so any two variable assignments assign the same things to all the (zero) free variables of any sentence. The proposition just proved then means that whether or not a sentence is satisfied in a structure relative to a variable assignment is completely independent of the assignment. We'll record this fact. It justifies the definition of satisfaction of a sentence in a structure (without mentioning a variable assignment) that follows.

Corollary 12.39. If ϕ is a sentence and s a variable assignment, then M, s ⊨ ϕ iff M, s′ ⊨ ϕ for every variable assignment s′.

Proof. Let s′ be any variable assignment. Since ϕ is a sentence, it has no free variables, and so every variable assignment s′ trivially assigns the same things to all free variables of ϕ as does s. So the condition of Proposition 12.38 is satisfied, and we have M, s ⊨ ϕ iff M, s′ ⊨ ϕ.

Definition 12.40. If ϕ is a sentence, we say that a structure M satisfies ϕ, M ⊨ ϕ, iff M, s ⊨ ϕ for all variable assignments s.

If M ⊨ ϕ, we also simply say that ϕ is true in M.


Proposition 12.41. Let M be a structure, ϕ be a sentence, and s a variable assignment. M ⊨ ϕ iff M, s ⊨ ϕ.

Proof. Exercise.

Proposition 12.42. Suppose ϕ( x ) only contains x free, and M is a structure. Then:

1. M ⊨ ∃ x ϕ( x ) iff M, s ⊨ ϕ( x ) for at least one variable assignment s.

2. M ⊨ ∀ x ϕ( x ) iff M, s ⊨ ϕ( x ) for all variable assignments s.

Proof. Exercise.

12.13 Extensionality
Extensionality, sometimes called relevance, can be expressed informally as follows: the only factors that bear upon the satisfaction of a formula ϕ in a structure M relative to a variable assignment s are the size of the domain and the assignments made by M and s to the elements of the language that actually appear in ϕ.
One immediate consequence of extensionality is that where two structures M and M′ agree on all the elements of the language appearing in a sentence ϕ and have the same domain, M and M′ must also agree on whether or not ϕ itself is true.

Proposition 12.43 (Extensionality). Let ϕ be a formula, and M1 and M2 be structures with |M1| = |M2|, and s a variable assignment on |M1| = |M2|. If c^{M1} = c^{M2}, R^{M1} = R^{M2}, and f^{M1} = f^{M2} for every constant symbol c, relation symbol R, and function symbol f occurring in ϕ, then M1, s ⊨ ϕ iff M2, s ⊨ ϕ.
Proof. First prove (by induction on t) that for every term, Val^{M1}_s(t) = Val^{M2}_s(t). Then prove the proposition by induction on ϕ, making use of the claim just proved for the induction basis (where ϕ is atomic).

Corollary 12.44 (Extensionality for Sentences). Let ϕ be a sentence and M1, M2 as in Proposition 12.43. Then M1 ⊨ ϕ iff M2 ⊨ ϕ.

Proof. Follows from Proposition 12.43 by Proposition 12.41.

Moreover, the value of a term, and whether or not a structure satisfies a formula, only depend on the values of its subterms.

Proposition 12.45. Let M be a structure, t and t′ terms, and s a variable assignment. Let s′ ∼x s be the x-variant of s given by s′(x) = Val^M_s(t′). Then Val^M_s(t[t′/x]) = Val^M_{s′}(t).

Proof. By induction on t.

1. If t is a constant, say, t ≡ c, then t[t′/x] = c, and Val^M_s(c) = c^M = Val^M_{s′}(c).

2. If t is a variable other than x, say, t ≡ y, then t[t′/x] = y, and Val^M_s(y) = Val^M_{s′}(y) since s′ ∼x s.

3. If t ≡ x, then t[t′/x] = t′. But Val^M_{s′}(x) = Val^M_s(t′) by definition of s′.

4. If t ≡ f(t1 , . . . , tn ) then we have:

Val^M_s(t[t′/x])
= Val^M_s(f(t1[t′/x], . . . , tn[t′/x]))   by definition of t[t′/x]
= f^M(Val^M_s(t1[t′/x]), . . . , Val^M_s(tn[t′/x]))   by definition of Val^M_s(f(. . .))
= f^M(Val^M_{s′}(t1 ), . . . , Val^M_{s′}(tn ))   by induction hypothesis
= Val^M_{s′}(t)   by definition of Val^M_{s′}(f(. . .))

Proposition 12.46. Let M be a structure, ϕ a formula, t a term, and s a variable assignment. Let s′ ∼x s be the x-variant of s given by s′(x) = Val^M_s(t). Then M, s ⊨ ϕ[t/x] iff M, s′ ⊨ ϕ.

Proof. Exercise.
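Proposition 12.46 can be spot-checked on the finite structure of Example 12.36, reusing the val, sat, and subst sketches from earlier (the particular ϕ and t below are our own choices; t is closed, hence free for x in ϕ):

    phi = ('ex', 'y', ('atom', 'R', ('x', 'y')))   # ∃y R(x, y)
    t = ('fun', 'f', ('a', 'b'))                   # Val^M_s(t) = f^M(1, 2) = 3
    s = {'x': 1, 'y': 1}
    s_var = {**s, 'x': val(M, s, t)}               # the x-variant s′ of s
    assert sat(M, s, subst(phi, t, 'x')) == sat(M, s_var, phi)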

12.14 Semantic Notions


Given the definition of structures for first-order languages, we can define some basic semantic properties of and relationships between sentences. The simplest of these is the notion of validity of a sentence. A sentence is valid if it is satisfied in every structure. Valid sentences are those that are satisfied regardless of how the non-logical symbols in them are interpreted. Valid sentences are therefore also called logical truths—they are true, i.e., satisfied, in any structure and hence their truth depends only on the logical symbols occurring in them and their syntactic structure, but not on the non-logical symbols or their interpretation.

Definition 12.47 (Validity). A sentence ϕ is valid, ⊨ ϕ, iff M ⊨ ϕ for every structure M.

Definition 12.48 (Entailment). A set of sentences Γ entails a sentence ϕ, Γ ⊨ ϕ, iff for every structure M with M ⊨ Γ, M ⊨ ϕ.

Definition 12.49 (Satisfiability). A set of sentences Γ is satisfiable if M ⊨ Γ for some structure M. If Γ is not satisfiable it is called unsatisfiable.

Proposition 12.50. A sentence ϕ is valid iff Γ ⊨ ϕ for every set of sentences Γ.

Proof. For the forward direction, let ϕ be valid, and let Γ be a set of sentences. Let M be a structure so that M ⊨ Γ. Since ϕ is valid, M ⊨ ϕ, hence Γ ⊨ ϕ.
For the contrapositive of the reverse direction, let ϕ be invalid, so there is a structure M with M ⊭ ϕ. When Γ = {⊤}, since ⊤ is valid, M ⊨ Γ. Hence, there is a structure M so that M ⊨ Γ but M ⊭ ϕ, hence Γ does not entail ϕ.

Proposition 12.51. Γ ⊨ ϕ iff Γ ∪ {¬ϕ} is unsatisfiable.

Proof. For the forward direction, suppose Γ ⊨ ϕ and suppose to the contrary that there is a structure M so that M ⊨ Γ ∪ {¬ϕ}. Since M ⊨ Γ and Γ ⊨ ϕ, M ⊨ ϕ. Also, since M ⊨ Γ ∪ {¬ϕ}, M ⊨ ¬ϕ, so we have both M ⊨ ϕ and M ⊭ ϕ, a contradiction. Hence, there can be no such structure M, so Γ ∪ {¬ϕ} is unsatisfiable.
For the reverse direction, suppose Γ ∪ {¬ϕ} is unsatisfiable. So for every structure M, either M ⊭ Γ or M ⊨ ϕ. Hence, for every structure M with M ⊨ Γ, M ⊨ ϕ, so Γ ⊨ ϕ.

Proposition 12.52. If Γ ⊆ Γ′ and Γ ⊨ ϕ, then Γ′ ⊨ ϕ.

Proof. Suppose that Γ ⊆ Γ′ and Γ ⊨ ϕ. Let M be such that M ⊨ Γ′; then M ⊨ Γ, and since Γ ⊨ ϕ, we get that M ⊨ ϕ. Hence, whenever M ⊨ Γ′, M ⊨ ϕ, so Γ′ ⊨ ϕ.

Theorem 12.53 (Semantic Deduction Theorem). Γ ∪ {ϕ} ⊨ ψ iff Γ ⊨ ϕ → ψ.

Proof. For the forward direction, let Γ ∪ {ϕ} ⊨ ψ and let M be a structure so that M ⊨ Γ. If M ⊨ ϕ, then M ⊨ Γ ∪ {ϕ}, so since Γ ∪ {ϕ} entails ψ, we get M ⊨ ψ. Therefore, M ⊨ ϕ → ψ, so Γ ⊨ ϕ → ψ.
For the reverse direction, let Γ ⊨ ϕ → ψ and M be a structure so that M ⊨ Γ ∪ {ϕ}. Then M ⊨ Γ, so M ⊨ ϕ → ψ, and since M ⊨ ϕ, M ⊨ ψ. Hence, whenever M ⊨ Γ ∪ {ϕ}, M ⊨ ψ, so Γ ∪ {ϕ} ⊨ ψ.

Proposition 12.54. Let M be a structure, ϕ( x ) a formula with one free variable x, and t a closed term. Then:

1. ϕ(t) ⊨ ∃ x ϕ( x )

2. ∀ x ϕ( x ) ⊨ ϕ(t)

Proof. 1. Suppose M ⊨ ϕ(t). Let s be a variable assignment with s(x) = Val^M(t). Then M, s ⊨ ϕ(t) since ϕ(t) is a sentence. By Proposition 12.46, M, s ⊨ ϕ( x ). By Proposition 12.42, M ⊨ ∃ x ϕ( x ).

2. Exercise.

Problems
Problem 12.1. Prove ??.

Problem 12.2. Prove ?? (Hint: Formulate and prove a version of ?? for terms.)

Problem 12.3. Give an inductive definition of the bound variable occurrences


along the lines of ??.

Problem 12.4. Is N, the standard model of arithmetic, covered? Explain.

Problem 12.5. Let L = {c, f , A} with one constant symbol, one one-place
function symbol and one two-place predicate symbol, and let the structure M
be given by

1. |M| = {1, 2, 3}

2. cM = 3

3. f M (1) = 2, f M (2) = 3, f M (3) = 2

4. AM = {h1, 2i, h2, 3i, h3, 3i}

(a) Let s(v) = 1 for all variables v. Find out whether

M, s ⊨ ∃ x ( A( f (z), c) → ∀y ( A(y, x ) ∨ A( f (y), x )))

Explain why or why not.


(b) Give a different structure and variable assignment in which the formula
is not satisfied.

Problem 12.6. Complete the proof of ??.

Problem 12.7. Prove ??

Problem 12.8. Prove ??.

Problem 12.9. Suppose L is a language without function symbols. Given a structure M, c a constant symbol and a ∈ |M|, define M[a/c] to be the structure that is just like M, except that c^{M[a/c]} = a. Define M ||= ϕ for sentences ϕ by:

1. ϕ ≡ ⊥: not M ||= ϕ.

2. ϕ ≡ R(d1 , . . . , dn ): M ||= ϕ iff ⟨d1^M, . . . , dn^M⟩ ∈ R^M.

3. ϕ ≡ d1 = d2 : M ||= ϕ iff d1^M = d2^M.

4. ϕ ≡ ¬ψ: M ||= ϕ iff not M ||= ψ.

5. ϕ ≡ (ψ ∧ χ): M ||= ϕ iff M ||= ψ and M ||= χ.

6. ϕ ≡ (ψ ∨ χ): M ||= ϕ iff M ||= ψ or M ||= χ (or both).

7. ϕ ≡ (ψ → χ): M ||= ϕ iff not M ||= ψ or M ||= χ (or both).

8. ϕ ≡ ∀ x ψ: M ||= ϕ iff for all a ∈ |M|, M[a/c] ||= ψ[c/x], if c does not occur in ψ.

9. ϕ ≡ ∃ x ψ: M ||= ϕ iff there is an a ∈ |M| such that M[a/c] ||= ψ[c/x], if c does not occur in ψ.

Let x1 , . . . , xn be all free variables in ϕ, c1 , . . . , cn constant symbols not in ϕ, a1 , . . . , an ∈ |M|, and s(xi) = ai.
Show that M, s ⊨ ϕ iff M[a1/c1, . . . , an/cn] ||= ϕ[c1/x1] . . . [cn/xn].
(This problem shows that it is possible to give a semantics for first-order logic that makes do without variable assignments.)

Problem 12.10. Suppose that f is a function symbol not in ϕ( x, y). Show that there is a structure M such that M ⊨ ∀ x ∃y ϕ( x, y) iff there is an M′ such that M′ ⊨ ∀ x ϕ( x, f ( x )).
(This problem is a special case of what's known as Skolem's Theorem; ∀ x ϕ( x, f ( x )) is called a Skolem normal form of ∀ x ∃y ϕ( x, y).)
Problem 12.11. Carry out the proof of ?? in detail.

Problem 12.12. Prove ??

Problem 12.13. 1. Show that Γ ⊨ ⊥ iff Γ is unsatisfiable.

2. Show that Γ ∪ {ϕ} ⊨ ⊥ iff Γ ⊨ ¬ϕ.

3. Suppose c does not occur in ϕ or Γ. Show that Γ ⊨ ∀ x ϕ iff Γ ⊨ ϕ[c/x].

Problem 12.14. Complete the proof of ??.



Chapter 13

Theories and Their Models

13.1 Introduction
The development of the axiomatic method is a significant achievement in the
history of science, and is of special importance in the history of mathemat-
ics. An axiomatic development of a field involves the clarification of many
questions: What is the field about? What are the most fundamental concepts?
How are they related? Can all the concepts of the field be defined in terms of
these fundamental concepts? What laws do, and must, these concepts obey?
The axiomatic method and logic were made for each other. Formal logic
provides the tools for formulating axiomatic theories, for proving theorems
from the axioms of the theory in a precisely specified way, for studying the
properties of all systems satisfying the axioms in a systematic way.

Definition 13.1. A set of sentences Γ is closed iff, whenever Γ ⊨ ϕ then ϕ ∈ Γ. The closure of a set of sentences Γ is {ϕ : Γ ⊨ ϕ}.
We say that Γ is axiomatized by a set of sentences ∆ if Γ is the closure of ∆.
We can think of an axiomatic theory as the set of sentences that is axiom-


atized by its set of axioms ∆. In other words, when we have a first-order lan-
guage which contains non-logical symbols for the primitives of the axiomat-
ically developed science we wish to study, together with a set of sentences
that express the fundamental laws of the science, we can think of the theory
as represented by all the sentences in this language that are entailed by the
axioms. This ranges from simple examples with only a single primitive and
simple axioms, such as the theory of partial orders, to complex theories such
as Newtonian mechanics.
The logical facts that make this formal approach to the axiomatic method so important are the following. Suppose Γ is an axiom system for a theory, i.e., a set of sentences.


1. We can state precisely when an axiom system captures an intended class of structures. That is, if we are interested in a certain class of structures, we will successfully capture that class by an axiom system Γ iff the structures are exactly those M such that M ⊨ Γ.

2. We may fail in this respect because there are M such that M ⊨ Γ, but M is not one of the structures we intend. This may lead us to add axioms which are not true in M.

3. If we are successful at least in the respect that Γ is true in all the intended structures, then a sentence ϕ is true in all intended structures whenever Γ ⊨ ϕ. Thus we can use logical tools (such as proof methods) to show that sentences are true in all intended structures simply by showing that they are entailed by the axioms.

4. Sometimes we don’t have intended structures in mind, but instead start


from the axioms themselves: we begin with some primitives that we
want to satisfy certain laws which we codify in an axiom system. One
thing that we would like to verify right away is that the axioms do not
contradict each other: if they do, there can be no concepts that obey
these laws, and we have tried to set up an incoherent theory. We can
verify that this doesn’t happen by finding a model of Γ. And if there are
models of our theory, we can use logical methods to investigate them,
and we can also use logical methods to construct models.

5. The independence of the axioms is likewise an important question. It may happen that one of the axioms is actually a consequence of the others, and so is redundant. We can prove that an axiom ϕ in Γ is redundant by proving Γ \ {ϕ} ⊨ ϕ. We can also prove that an axiom is not redundant by showing that (Γ \ {ϕ}) ∪ {¬ϕ} is satisfiable. For instance, this is how it was shown that the parallel postulate is independent of the other axioms of geometry.

6. Another important question is that of definability of concepts in a the-


ory: The choice of the language determines what the models of a theory
consists of. But not every aspect of a theory must be represented sep-
arately in its models. For instance, every ordering ≤ determines a cor-
responding strict ordering <—given one, we can define the other. So it
is not necessary that a model of a theory involving such an order must
also contain the corresponding strict ordering. When is it the case, in
general, that one relation can be defined in terms of others? When is it impossible to define a relation in terms of others (so that it must be added to the primitives of the language)?


13.2 Expressing Properties of Structures


It is often useful and important to express conditions on functions and rela-
tions, or more generally, that the functions and relations in a structure satisfy
these conditions. For instance, we would like to have ways of distinguishing
those structures for a language which “capture” what we want the predicate
symbols to “mean” from those that do not. Of course we’re completely free
to specify which structures we “intend,” e.g., we can specify that the inter-
pretation of the predicate symbol ≤ must be an ordering, or that we are only
interested in interpretations of L in which the domain consists of sets and ∈
is interpreted by the “is an element of” relation. But can we do this with sen-
tences of the language? In other words, which conditions on a structure M can
we express by a sentence (or perhaps a set of sentences) in the language of M?
There are some conditions that we will not be able to express. For instance,
there is no sentence of L A which is only true in a structure M if |M| = N.
We cannot express “the domain contains only natural numbers.” But there
are “structural properties” of structures that we perhaps can express. Which
properties of structures can we express by sentences? Or, to put it another
way, which collections of structures can we describe as those making a sen-
tence (or set of sentences) true?
Definition 13.2 (Model of a set). Let Γ be a set of sentences in a language L. We say that a structure M is a model of Γ if M ⊨ ϕ for all ϕ ∈ Γ.
Example 13.3. The sentence ∀ x x ≤ x is true in M iff ≤M is a reflexive relation.
The sentence ∀ x ∀y (( x ≤ y ∧ y ≤ x ) → x = y) is true in M iff ≤M is anti-
symmetric. The sentence ∀ x ∀y ∀z (( x ≤ y ∧ y ≤ z) → x ≤ z) is true in M iff
≤M is transitive. Thus, the models of
{ ∀ x x ≤ x,
∀ x ∀y (( x ≤ y ∧ y ≤ x ) → x = y),
∀ x ∀y ∀z (( x ≤ y ∧ y ≤ z) → x ≤ z) }
are exactly those structures in which ≤M is reflexive, anti-symmetric, and
transitive, i.e., a partial order. Hence, we can take them as axioms for the
first-order theory of partial orders.

13.3 Examples of First-Order Theories


Example 13.4. The theory of strict linear orders in the language L< is axiom-
atized by the set
∀ x ¬ x < x,
∀ x ∀y (( x < y ∨ y < x ) ∨ x = y),
∀ x ∀y ∀z (( x < y ∧ y < z) → x < z)


It completely captures the intended structures: every strict linear order is a


model of this axiom system, and vice versa, if R is a linear order on a set X,
then the structure M with |M| = X and <M = R is a model of this theory.
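For a language with a single two-place predicate symbol and no constant or function symbols, one can even search for models of such a finite axiom system by brute force over all structures with a small finite domain, reusing the sat sketch from section 12.11. A sketch (with R playing the role of <):

    from itertools import product

    def structures(n):
        # all structures with domain {0, ..., n-1} and one binary predicate R
        dom = list(range(n))
        pairs = [(i, j) for i in dom for j in dom]
        for bits in product([False, True], repeat=len(pairs)):
            ext = {p for p, b in zip(pairs, bits) if b}
            yield {'domain': dom, 'consts': {}, 'funcs': {},
                   'preds': {'R': ext}}

    irref = ('all', 'x', ('not', ('atom', 'R', ('x', 'x'))))
    conn  = ('all', 'x', ('all', 'y',
              ('or', ('or', ('atom', 'R', ('x', 'y')), ('atom', 'R', ('y', 'x'))),
                     ('eq', 'x', 'y'))))
    trans = ('all', 'x', ('all', 'y', ('all', 'z',
              ('imp', ('and', ('atom', 'R', ('x', 'y')), ('atom', 'R', ('y', 'z'))),
                      ('atom', 'R', ('x', 'z'))))))

    models = [M for M in structures(2)
              if all(sat(M, {}, ax) for ax in (irref, conn, trans))]
    assert len(models) == 2      # the two strict linear orders of a 2-element set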
Example 13.5. The theory of groups in the language e (constant symbol), · (two-place function symbol) is axiomatized by

∀ x ( x · e) = x
∀ x ∀y ∀z ( x · (y · z)) = (( x · y) · z)
∀ x ∃y ( x · y) = e
Example 13.6. The theory of Peano arithmetic is axiomatized by the following sentences in the language of arithmetic LA.

¬∃ x x′ = 0
∀ x ∀y ( x′ = y′ → x = y)
∀ x ∀y ( x < y ↔ ∃z ( x + z′ = y))
∀ x ( x + 0) = x
∀ x ∀y ( x + y′) = ( x + y)′
∀ x ( x × 0) = 0
∀ x ∀y ( x × y′) = (( x × y) + x )

plus all sentences of the form

( ϕ(0) ∧ ∀ x ( ϕ( x ) → ϕ( x′))) → ∀ x ϕ( x )

Since there are infinitely many sentences of the latter form, this axiom system is infinite. The latter form is called the induction schema. (Actually, the induction schema is a bit more complicated than we let on here.)
The third axiom is an explicit definition of <.
Example 13.7. The theory of pure sets plays an important role in the founda-
tions (and in the philosophy) of mathematics. A set is pure if all its elements
are also pure sets. The empty set counts therefore as pure, but a set that has
something as an element that is not a set would not be pure. So the pure sets
are those that are formed just from the empty set and no “urelements,” i.e.,
objects that are not themselves sets.
The following might be considered as an axiom system for a theory of pure
sets:

∃ x ¬∃y y ∈ x
∀ x ∀y (∀z(z ∈ x ↔ z ∈ y) → x = y)
∀ x ∀y ∃z ∀u (u ∈ z ↔ (u = x ∨ u = y))
∀ x ∃y ∀z (z ∈ y ↔ ∃u (z ∈ u ∧ u ∈ x ))


plus all sentences of the form

∃ x ∀y (y ∈ x ↔ ϕ(y))
The first axiom says that there is a set with no elements (i.e., ∅ exists); the second says that sets are extensional; the third that for any sets X and Y, the set {X, Y} exists; the fourth that for any set X, the union of all the elements of X exists.
The sentences mentioned last are collectively called the naive comprehension
scheme. It essentially says that for every ϕ( x ), the set { x : ϕ( x )} exists—so
at first glance a true, useful, and perhaps even necessary axiom. It is called
“naive” because, as it turns out, it makes this theory unsatisfiable: if you take
ϕ(y) to be ¬y ∈ y, you get the sentence
∃ x ∀y (y ∈ x ↔ ¬y ∈ y)
and this sentence is not satisfied in any structure.
Example 13.8. In the area of mereology, the relation of parthood is a funda-
mental relation. Just like theories of sets, there are theories of parthood that
axiomatize various conceptions (sometimes conflicting) of this relation.
The language of mereology contains a single two-place predicate sym-
bol P , and P ( x, y) “means” that x is a part of y. When we have this inter-
pretation in mind, a structure for this language is called a parthood structure.
Of course, not every structure for a single two-place predicate will really de-
serve this name. To have a chance of capturing “parthood,” P M must satisfy
some conditions, which we can lay down as axioms for a theory of parthood.
For instance, parthood is a partial order on objects: every object is a part (al-
beit an improper part) of itself; no two different objects can be parts of each
other; a part of a part of an object is itself part of that object. Note that in this
sense “is a part of” resembles “is a subset of,” but does not resemble “is an
element of” which is neither reflexive nor transitive.
∀ x P ( x, x ),
∀ x ∀y ((P ( x, y) ∧ P (y, x )) → x = y),
∀ x ∀y ∀z ((P ( x, y) ∧ P (y, z)) → P ( x, z)),

Moreover, any two objects have a mereological sum (an object that has these
two objects as parts, and is minimal in this respect).

∀ x ∀y ∃z ∀u (P (z, u) ↔ (P ( x, u) ∧ P (y, u)))


These are only some of the basic principles of parthood considered by meta-
physicians. Further principles, however, quickly become hard to formulate or
write down without first introducing some defined relations. For instance,
most metaphysicians interested in mereology also view the following as a
valid principle: whenever an object x has a proper part y, it also has a part z
that has no parts in common with y, and so that the fusion of y and z is x.


13.4 Expressing Relations in a Structure


One main use formulas can be put to is to express properties and relations in
a structure M in terms of the primitives of the language L of M. By this we
mean the following: the domain of M is a set of objects. The constant symbols,
function symbols, and predicate symbols are interpreted in M by some objects
in |M|, functions on |M|, and relations on |M|. For instance, if A20 is in L, then M assigns to it a relation R = (A20)^M. Then the formula A20 (v1 , v2 ) expresses that very relation, in the following sense: if a variable assignment s maps v1 to a ∈ |M| and v2 to b ∈ |M|, then

Rab iff M, s ⊨ A20 (v1 , v2 ).

Note that we have to involve variable assignments here: we can't just say "Rab iff M ⊨ A20 ( a, b)" because a and b are not symbols of our language: they are elements of |M|.
Since we don’t just have atomic formulas, but can combine them using
the logical connectives and the quantifiers, more complex formulas can define
other relations which aren’t directly built into M. We’re interested in how to
do that, and specifically, which relations we can define in a structure.

Definition 13.9. Let ϕ(v1 , . . . , vn ) be a formula of L in which only v1 , . . . , vn occur free, and let M be a structure for L. ϕ(v1 , . . . , vn ) expresses the relation R ⊆ |M|^n iff

Ra1 . . . an iff M, s ⊨ ϕ(v1 , . . . , vn )

for any variable assignment s with s(vi ) = ai (i = 1, . . . , n).

Example 13.10. In the standard model of arithmetic N, the formula v1 < v2 ∨ v1 = v2 expresses the ≤ relation on N. The formula v2 = v1′ expresses the successor relation, i.e., the relation R ⊆ N^2 where Rnm holds if m is the successor of n. The formula v1 = v2′ expresses the predecessor relation. The formulas ∃v3 (v3 ≠ 0 ∧ v2 = (v1 + v3 )) and ∃v3 (v1 + v3′) = v2 both express the < relation. This means that the predicate symbol < is actually superfluous in the language of arithmetic; it can be defined.
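Definition 13.9 can also be tested mechanically on a finite structure, again reusing the sat sketch from section 12.11. On a "miniature" of N with domain {0, . . . , 5} (our own toy structure), the formula x < y ∨ x = y expresses exactly the ≤ relation there:

    dom = range(6)
    M6 = {'domain': dom, 'consts': {}, 'funcs': {},
          'preds': {'<': {(n, m) for n in dom for m in dom if n < m}}}

    leq = ('or', ('atom', '<', ('x', 'y')), ('eq', 'x', 'y'))
    expressed = {(a, b) for a in dom for b in dom
                 if sat(M6, {'x': a, 'y': b}, leq)}
    assert expressed == {(a, b) for a in dom for b in dom if a <= b}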

This idea is not just interesting in specific structures, but generally when-
ever we use a language to describe an intended model or models, i.e., when
we consider theories. These theories often only contain a few predicate sym-
bols as basic symbols, but in the domain they are used to describe often many
other relations play an important role. If these other relations can be system-
atically expressed by the relations that interpret the basic predicate symbols
of the language, we say we can define them in the language.


13.5 The Theory of Sets


Almost all of mathematics can be developed in the theory of sets. Developing
mathematics in this theory involves a number of things. First, it requires a set
of axioms for the relation ∈. A number of different axiom systems have been
developed, sometimes with conflicting properties of ∈. The axiom system
known as ZFC, Zermelo-Fraenkel set theory with the axiom of choice stands
out: it is by far the most widely used and studied, because it turns out that its
axioms suffice to prove almost all the things mathematicians expect to be able
to prove. But before that can be established, it first is necessary to make clear
how we can even express all the things mathematicians would like to express.
For starters, the language contains no constant symbols or function symbols,
so it seems at first glance unclear that we can talk about particular sets (such as
∅ or N), can talk about operations on sets (such as X ∪ Y and ℘( X )), let alone
other constructions which involve things other than sets, such as relations and
functions.
To begin with, “is an element of” is not the only relation we are interested
in: “is a subset of” seems almost as important. But we can define “is a subset
of” in terms of “is an element of.” To do this, we have to find a formula ϕ( x, y)
in the language of set theory which is satisfied by a pair of sets ⟨X, Y⟩ iff X ⊆
Y. But X is a subset of Y just in case all elements of X are also elements of Y.
So we can define ⊆ by the formula

∀z (z ∈ x → z ∈ y)

Now, whenever we want to use the relation ⊆ in a formula, we could instead


use that formula (with x and y suitably replaced, and the bound variable z
renamed if necessary). For instance, extensionality of sets means that if any
sets x and y are contained in each other, then x and y must be the same set.
This can be expressed by ∀ x ∀y (( x ⊆ y ∧ y ⊆ x ) → x = y), or, if we replace ⊆
by the above definition, by

∀ x ∀y ((∀z (z ∈ x → z ∈ y) ∧ ∀z (z ∈ y → z ∈ x )) → x = y).

This is in fact one of the axioms of ZFC, the “axiom of extensionality.”


There is no constant symbol for ∅, but we can express “x is empty” by
¬∃y y ∈ x. Then “∅ exists” becomes the sentence ∃ x ¬∃y y ∈ x. This is an-
other axiom of ZFC. (Note that the axiom of extensionality implies that there
is only one empty set.) Whenever we want to talk about ∅ in the language of
set theory, we would write this as “there is a set that’s empty and . . . ” As an
example, to express the fact that ∅ is a subset of every set, we could write

∃ x (¬∃y y ∈ x ∧ ∀z x ⊆ z)

where, of course, x ⊆ z would in turn have to be replaced by its definition.
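Carrying out this replacement explicitly is a useful check on how the definitions work. With ⊆ unpacked (and the bound variable renamed to avoid a clash), the sentence above becomes

∃x (¬∃y y ∈ x ∧ ∀z ∀u (u ∈ x → u ∈ z)),

which contains nothing but ∈, quantifiers, and connectives.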


To talk about operations on sets, such as X ∪ Y and ℘( X ), we have to use


a similar trick. There are no function symbols in the language of set theory,
but we can express the functional relations X ∪ Y = Z and ℘( X ) = Y by

∀u ((u ∈ x ∨ u ∈ y) ↔ u ∈ z)
∀u (u ⊆ x ↔ u ∈ y)
since the elements of X ∪ Y are exactly the sets that are either elements of X or
elements of Y, and the elements of ℘( X ) are exactly the subsets of X. However,
this doesn’t allow us to use x ∪ y or ℘( x ) as if they were terms: we can only
use the entire formulas that define the relations X ∪ Y = Z and ℘( X ) = Y.
In fact, we do not know that these relations are ever satisfied, i.e., we do not
know that unions and power sets always exist. For instance, the sentence
∀ x ∃y ℘( x ) = y is another axiom of ZFC (the power set axiom).
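The existence of unions can be guaranteed in the same style: one way of saying that the union of any two sets exists would be the sentence

∀x ∀y ∃z ∀u ((u ∈ x ∨ u ∈ y) ↔ u ∈ z),

obtained by quantifying over the formula that defines X ∪ Y = Z. (The official union axiom of ZFC is stated more generally, for unions of arbitrary collections of sets, but it implies this version.)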
Now what about talk of ordered pairs or functions? Here we have to explain
how we can think of ordered pairs and functions as special kinds of sets.
One way to define the ordered pair ⟨x, y⟩ is as the set {{x}, {x, y}}. But like
before, we cannot introduce a function symbol that names this set; we can
only define the relation ⟨x, y⟩ = z, i.e., {{x}, {x, y}} = z:

∀u (u ∈ z ↔ (∀v (v ∈ u ↔ v = x) ∨ ∀v (v ∈ u ↔ (v = x ∨ v = y))))

This says that the elements u of z are exactly those sets which either have x
as their only element or have x and y as their only elements (in other words, those
sets that are either identical to {x} or identical to {x, y}). Once we have this,
we can say further things, e.g., that X × Y = Z:

∀z (z ∈ Z ↔ ∃x ∃y (x ∈ X ∧ y ∈ Y ∧ ⟨x, y⟩ = z))

A function f : X → Y can be thought of as the relation f(x) = y, i.e., as
the set of pairs {⟨x, y⟩ : f(x) = y}. We can then say that a set f is a function
from X to Y if (a) it is a relation ⊆ X × Y, (b) it is total, i.e., for all x ∈ X
there is some y ∈ Y such that ⟨x, y⟩ ∈ f, and (c) it is functional, i.e., whenever
⟨x, y⟩, ⟨x, y′⟩ ∈ f, y = y′ (because values of functions must be unique). So “f
is a function from X to Y” can be written as:

∀u (u ∈ f → ∃x ∃y (x ∈ X ∧ y ∈ Y ∧ ⟨x, y⟩ = u)) ∧
∀x (x ∈ X → (∃y (y ∈ Y ∧ maps(f, x, y)) ∧
(∀y ∀y′ ((maps(f, x, y) ∧ maps(f, x, y′)) → y = y′)))

where maps(f, x, y) abbreviates ∃v (v ∈ f ∧ ⟨x, y⟩ = v) (this formula expresses
“f(x) = y”).
It is now also not hard to express that f : X → Y is injective, for instance:

f : X → Y ∧ ∀x ∀x′ ((x ∈ X ∧ x′ ∈ X ∧
∃y (maps(f, x, y) ∧ maps(f, x′, y))) → x = x′)


A function f : X → Y is injective iff, whenever f maps x, x′ ∈ X to a single y,
x = x′. If we abbreviate this formula as inj(f, X, Y), we’re already in a position
to state in the language of set theory something as non-trivial as Cantor’s
theorem: there is no injective function from ℘(X) to X:

∀ X ∀Y (℘( X ) = Y → ¬∃ f inj( f , Y, X ))
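Other properties of functions can be expressed in the same way. For instance, “f : X → Y is surjective” could be written as

f : X → Y ∧ ∀y (y ∈ Y → ∃x (x ∈ X ∧ maps(f, x, y))),

where, again, f : X → Y has to be replaced by its definition.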

One might think that set theory requires another axiom that guarantees
the existence of a set for every defining property. If ϕ( x ) is a formula of set
theory with the variable x free, we can consider the sentence

∃y ∀ x ( x ∈ y ↔ ϕ( x )).

This sentence states that there is a set y whose elements are all and only those
x that satisfy ϕ( x ). This schema is called the “comprehension principle.” It
looks very useful; unfortunately it is inconsistent. Take ϕ( x ) ≡ ¬ x ∈ x, then
the comprehension principle states

∃y ∀x (x ∈ y ↔ x ∉ x),

i.e., it states the existence of a set of all sets that are not elements of them-
selves. No such set can exist—this is Russell’s Paradox. ZFC, in fact, contains
a restricted—and consistent—version of this principle, the separation princi-
ple:
∀z ∃y ∀x (x ∈ y ↔ (x ∈ z ∧ ϕ(x))).
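It is instructive to check why Russell’s Paradox does not reappear. Applying the separation principle with ϕ(x) ≡ ¬x ∈ x only yields, for each given set z, a set y whose elements are those x ∈ z with x ∉ x. Asking whether y ∈ y now produces no contradiction: the reasoning merely shows that y ∉ z, i.e., that no set contains all sets.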

13.6 Expressing the Size of Structures


There are some properties of structures we can express even without using
the non-logical symbols of a language. For instance, there are sentences which
are true in a structure iff the domain of the structure has at least, at most, or
exactly a certain number n of elements.

Proposition 13.11. The sentence

ϕ≥n ≡ ∃x₁ ∃x₂ . . . ∃xₙ (x₁ ≠ x₂ ∧ x₁ ≠ x₃ ∧ x₁ ≠ x₄ ∧ · · · ∧ x₁ ≠ xₙ ∧
      x₂ ≠ x₃ ∧ x₂ ≠ x₄ ∧ · · · ∧ x₂ ≠ xₙ ∧
      · · ·
      xₙ₋₁ ≠ xₙ)

is true in a structure M iff |M| contains at least n elements. Consequently, M ⊨
¬ϕ≥n+1 iff |M| contains at most n elements.
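For instance, for n = 2 the sentence ϕ≥2 is just ∃x₁ ∃x₂ x₁ ≠ x₂, which is true in a structure M iff two distinct elements of |M| can be assigned to x₁ and x₂, i.e., iff |M| has at least two elements.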


Proposition 13.12. The sentence

ϕ=n ≡ ∃x₁ ∃x₂ . . . ∃xₙ (x₁ ≠ x₂ ∧ x₁ ≠ x₃ ∧ x₁ ≠ x₄ ∧ · · · ∧ x₁ ≠ xₙ ∧
      x₂ ≠ x₃ ∧ x₂ ≠ x₄ ∧ · · · ∧ x₂ ≠ xₙ ∧
      · · ·
      xₙ₋₁ ≠ xₙ ∧
      ∀y (y = x₁ ∨ · · · ∨ y = xₙ))

is true in a structure M iff |M| contains exactly n elements.

Proposition 13.13. A structure is infinite iff it is a model of

{ϕ≥1, ϕ≥2, ϕ≥3, . . . }

There is no single purely logical sentence which is true in M iff |M| is


infinite. However, one can give sentences with non-logical predicate symbols
which only have infinite models (although not every infinite structure is a
model of them). The property of being a finite structure, and the property of
being a non-enumerable structure cannot even be expressed with an infinite
set of sentences. These facts follow from the compactness and Löwenheim-
Skolem theorems.

Problems
Problem 13.1. Find formulas in L A which define the following relations:

1. n is between i and j;

2. n evenly divides m (i.e., m is a multiple of n);

3. n is a prime number (i.e., n > 1 and no number other than 1 and n
evenly divides n).

Problem 13.2. Suppose the formula ϕ(v1 , v2 ) expresses the relation R ⊆ |M|2
in a structure M. Find formulas that express the following relations:

1. the inverse R−1 of R;

2. the relative product R | R;

Can you find a way to express R+ , the transitive closure of R?

Problem 13.3. Let L be the language containing a 2-place predicate symbol


< only (no other constant symbols, function symbols or predicate symbols—
except of course =). Let N be the structure such that |N| = N and <ᴺ =
{⟨n, m⟩ : n < m}. Prove the following:


1. {0} is definable in N;

2. {1} is definable in N;

3. {2} is definable in N;

4. for each n ∈ N, the set {n} is definable in N;

5. every finite subset of |N| is definable in N;

6. every co-finite subset of |N| is definable in N (where X ⊆ N is co-finite


iff N \ X is finite).

Problem 13.4. Show that the comprehension principle is inconsistent by giv-


ing a derivation that shows

∃y ∀x (x ∈ y ↔ x ∉ x) ⊢ ⊥.

It may help to first show ( A → ¬ A) ∧ (¬ A → A) ⊢ ⊥.



Chapter 14

Derivation Systems

This chapter collects general material on derivation systems. A text-


book using a specific system can insert the introduction section plus the
relevant survey section at the beginning of the chapter introducing that
system.

14.1 Introduction
Logics commonly have both a semantics and a derivation system. The seman-
tics concerns concepts such as truth, satisfiability, validity, and entailment.
The purpose of derivation systems is to provide a purely syntactic method
of establishing entailment and validity. They are purely syntactic in the sense
that a derivation in such a system is a finite syntactic object, usually a sequence
(or other finite arrangement) of sentences or formulas. Good derivation sys-
tems have the property that any given sequence or arrangement of sentences
or formulas can be verified mechanically to be “correct.”
The simplest (and historically first) derivation systems for first-order logic
were axiomatic. A sequence of formulas counts as a derivation in such a sys-
tem if each individual formula in it is either among a fixed set of “axioms”
or follows from formulas coming before it in the sequence by one of a fixed
number of “inference rules”—and it can be mechanically verified if a formula
is an axiom and whether it follows correctly from other formulas by one of
the inference rules. Axiomatic proof systems are easy to describe—and also
easy to handle meta-theoretically—but derivations in them are hard to read
and understand, and are also hard to produce.
Other derivation systems have been developed with the aim of making it
easier to construct derivations or easier to understand derivations once they
are complete. Examples are natural deduction, truth trees, also known as
tableaux proofs, and the sequent calculus. Some derivation systems are de-


signed especially with mechanization in mind, e.g., the resolution method is


easy to implement in software (but its derivations are essentially impossible to
understand). Most of these other proof systems represent derivations as trees
of formulas rather than sequences. This makes it easier to see which parts of
a derivation depend on which other parts.
So for a given logic, such as first-order logic, the different derivation sys-
tems will give different explications of what it is for a sentence to be a theorem
and what it means for a sentence to be derivable from some others. However
that is done (via axiomatic derivations, natural deductions, sequent deriva-
tions, truth trees, resolution refutations), we want these relations to match the
semantic notions of validity and entailment. Let’s write ⊢ ϕ for “ϕ is a theorem”
and “Γ ⊢ ϕ” for “ϕ is derivable from Γ.” However ⊢ is defined, we
want it to match up with ⊨, that is:

1. ⊢ ϕ if and only if ⊨ ϕ

2. Γ ⊢ ϕ if and only if Γ ⊨ ϕ

The “only if” direction of the above is called soundness. A derivation system is
sound if derivability guarantees entailment (or validity). Every decent deriva-
tion system has to be sound; unsound derivation systems are not useful at all.
After all, the entire purpose of a derivation is to provide a syntactic guarantee
of validity or entailment. We’ll prove soundness for the derivation systems
we present.
The converse “if” direction is also important: it is called completeness. A
complete derivation system is strong enough to show that ϕ is a theorem
whenever ϕ is valid, and that Γ ⊢ ϕ whenever Γ ⊨ ϕ. Completeness
is harder to establish, and some logics have no complete derivation systems.
First-order logic does. Kurt Gödel was the first one to prove completeness for
a derivation system of first-order logic in his 1929 dissertation.
Another concept that is connected to derivation systems is that of consis-
tency. A set of sentences is called inconsistent if anything whatsoever can be
derived from it, and consistent otherwise. Inconsistency is the syntactic coun-
terpart to unsatisfiablity: like unsatisfiable sets, inconsistent sets of sentences
do not make good theories, they are defective in a fundamental way. Con-
sistent sets of sentences may not be true or useful, but at least they pass that
minimal threshold of logical usefulness. For different derivation systems the
specific definition of consistency of sets of sentences might differ, but like ⊢,
we want consistency to coincide with its semantic counterpart, satisfiability.
We want it to always be the case that Γ is consistent if and only if it is satis-
fiable. Here, the “if” direction amounts to completeness (consistency guaran-
tees satisfiability), and the “only if” direction amounts to soundness (satisfi-
ability guarantees consistency). In fact, for classical first-order logic, the two
versions of soundness and completeness are equivalent.


14.2 The Sequent Calculus


While many derivation systems operate with arrangements of sentences, the
sequent calculus operates with sequents. A sequent is an expression of the
form
ϕ₁, . . . , ϕₘ ⇒ ψ₁, . . . , ψₙ,
that is a pair of sequences of sentences, separated by the sequent symbol ⇒.
Either sequence may be empty. A derivation in the sequent calculus is a tree
of sequents, where the topmost sequents are of a special form (they are called
“initial sequents” or “axioms”) and every other sequent follows from the se-
quents immediately above it by one of the rules of inference. The rules of
inference either manipulate the sentences in the sequents (adding, removing,
or rearranging them on either the left or the right), or they introduce a com-
plex formula in the conclusion of the rule. For instance, the ∧L rule allows the
inference from ϕ, Γ ⇒ ∆ to ϕ ∧ ψ, Γ ⇒ ∆, and the →R rule allows the inference
from ϕ, Γ ⇒ ∆, ψ to Γ ⇒ ∆, ϕ → ψ, for any Γ, ∆, ϕ, and ψ. (In particular, Γ
and ∆ may be empty.)
The ⊢ relation based on the sequent calculus is defined as follows: Γ ⊢ ϕ
iff there is some sequence Γ₀ such that every sentence in Γ₀ is in Γ and there is a
derivation with the sequent Γ₀ ⇒ ϕ at its root. ϕ is a theorem in the sequent
calculus if the sequent ⇒ ϕ has a derivation. For instance, here is a derivation
that shows that ⊢ (ϕ ∧ ψ) → ϕ:

ϕ ⇒ ϕ
──────────── ∧L
ϕ ∧ ψ ⇒ ϕ
──────────────── →R
⇒ (ϕ ∧ ψ) → ϕ

A set Γ is inconsistent in the sequent calculus if there is a derivation of
Γ₀ ⇒ (where every sentence in Γ₀ is in Γ and the right side of the sequent is empty).
Using the rule WR, any sentence can be derived from an inconsistent set.
The sequent calculus was invented in the 1930s by Gerhard Gentzen. Be-
cause of its systematic and symmetric design, it is a very useful formalism for
developing a theory of derivations. It is relatively easy to find derivations in
the sequent calculus, but these derivations are often hard to read and their
connection to proofs is sometimes not easy to see. It has proved to be a very
elegant approach to derivation systems, however, and many logics have se-
quent calculus systems.

14.3 Natural Deduction


Natural deduction is a derivation system intended to mirror actual reasoning
(especially the kind of regimented reasoning employed by mathematicians).
Actual reasoning proceeds by a number of “natural” patterns. For instance,


proof by cases allows us to establish a conclusion on the basis of a disjunc-


tive premise, by establishing that the conclusion follows from either of the
disjuncts. Indirect proof allows us to establish a conclusion by showing that
its negation leads to a contradiction. Conditional proof establishes a condi-
tional claim “if . . . then . . . ” by showing that the consequent follows from
the antecedent. Natural deduction is a formalization of some of these nat-
ural inferences. Each of the logical connectives and quantifiers comes with
two rules, an introduction and an elimination rule, and they each correspond
to one such natural inference pattern. For instance, →Intro corresponds to
conditional proof, and ∨Elim to proof by cases. A particularly simple rule is
∧Elim which allows the inference from ϕ ∧ ψ to ϕ (or ψ).
One feature that distinguishes natural deduction from other derivation
systems is its use of assumptions. A derivation in natural deduction is a tree
of formulas. A single formula stands at the root of the tree of formulas, and
the “leaves” of the tree are formulas from which the conclusion is derived.
In natural deduction, some leaf formulas play a role inside the derivation but
are “used up” by the time the derivation reaches the conclusion. This corre-
sponds to the practice, in actual reasoning, of introducing hypotheses which
only remain in effect for a short while. For instance, in a proof by cases, we
assume the truth of each of the disjuncts; in conditional proof, we assume the
truth of the antecedent; in indirect proof, we assume the truth of the nega-
tion of the conclusion. This way of introducing hypothetical assumptions
and then doing away with them in the service of establishing an intermedi-
ate step is a hallmark of natural deduction. The formulas at the leaves of a
natural deduction derivation are called assumptions, and some of the rules of
inference may “discharge” them. For instance, if we have a derivation of ψ
from some assumptions which include ϕ, then the →Intro rule allows us to
infer ϕ → ψ and discharge any assumption of the form ϕ. (To keep track of
which assumptions are discharged at which inferences, we label the inference
and the assumptions it discharges with a number.) The assumptions that re-
main undischarged at the end of the derivation are together sufficient for the
truth of the conclusion, and so a derivation establishes that its undischarged
assumptions entail its conclusion.
The relation Γ ⊢ ϕ based on natural deduction holds iff there is a deriva-
tion in which ϕ is the last sentence in the tree, and every leaf which is undis-
charged is in Γ. ϕ is a theorem in natural deduction iff there is a derivation in
which ϕ is the last sentence and all assumptions are discharged. For instance,
here is a derivation that shows that ⊢ (ϕ ∧ ψ) → ϕ:

[ϕ ∧ ψ]¹
───────── ∧Elim
ϕ
───────────── 1 →Intro
(ϕ ∧ ψ) → ϕ


The label 1 indicates that the assumption ϕ ∧ ψ is discharged at the →Intro


inference.
A set Γ is inconsistent iff Γ ⊢ ⊥ in natural deduction. The rule ⊥I makes
it so that from an inconsistent set, any sentence can be derived.
Natural deduction systems were developed by Gerhard Gentzen and Sta-
nisław Jaśkowski in the 1930s, and later developed by Dag Prawitz and Fred-
eric Fitch. Because its inferences mirror natural methods of proof, it is favored
by philosophers. The versions developed by Fitch are often used in introduc-
tory logic textbooks. In the philosophy of logic, the rules of natural deduc-
tion have sometimes been taken to give the meanings of the logical operators
(“proof-theoretic semantics”).

14.4 Tableaux

While many derivation systems operate with arrangements of sentences, tableaux


operate with signed formulas. A signed formula is a pair consisting of a truth
value sign (T or F) and a sentence

T ϕ or F ϕ.

A tableau consists of signed formulas arranged in a downward-branching


tree. It begins with a number of assumptions and continues with signed for-
mulas which result from one of the signed formulas above it by applying one
of the rules of inference. Each rule allows us to add one or more signed formu-
las to the end of a branch, or two signed formulas side by side—in this case a
branch splits into two, with the two added signed formulas forming the ends
of the two branches.
A rule applied to a complex signed formula results in the addition of
signed formulas which are immediate sub-formulas. They come in pairs, one
rule for each of the two signs. For instance, the ∧T rule applies to T ϕ ∧ ψ,
and allows the addition of both the signed formulas T ϕ and T ψ to the
end of any branch containing T ϕ ∧ ψ, and the ∧F rule allows a branch to
be split by adding F ϕ and F ψ side-by-side. A tableau is closed if every one
of its branches contains a matching pair of signed formulas T ϕ and F ϕ.
The ⊢ relation based on tableaux is defined as follows: Γ ⊢ ϕ iff there is
some finite set Γ₀ = {ψ₁, . . . , ψₙ} ⊆ Γ such that there is a closed tableau for
the assumptions

{F ϕ, T ψ₁, . . . , T ψₙ}

For instance, here is a closed tableau that shows that ⊢ (ϕ ∧ ψ) → ϕ:


1. F (ϕ ∧ ψ) → ϕ    Assumption
2. T ϕ ∧ ψ          →F 1
3. F ϕ              →F 1
4. T ϕ              ∧T 2
5. T ψ              ∧T 2

A set Γ is inconsistent in the tableau calculus if there is a closed tableau for


assumptions
{T ψ₁, . . . , T ψₙ}
for some ψ₁, . . . , ψₙ ∈ Γ.
Tableaux were invented in the 1950s independently by Evert
Beth and Jaakko Hintikka, and simplified and popularized by Raymond Smullyan.
The method is very easy to use, since constructing a tableau is a very systematic proce-
dure. Because of the systematic nature of tableaux, they also lend themselves
to implementation by computer. However, tableaux are often hard to read and
their connection to proofs is sometimes not easy to see. The approach is also
quite general, and many different logics have tableau systems. Tableaux also
help us to find structures that satisfy given (sets of) sentences: if the set is
satisfiable, it won’t have a closed tableau, i.e., any tableau will have an open
branch. The satisfying structure can be “read off” an open branch, provided
all rules it is possible to apply have been applied on that branch. There is also
a very close connection to the sequent calculus: essentially, a closed tableau is
a condensed derivation in the sequent calculus, written upside-down.

14.5 Axiomatic Derivations


Axiomatic derivations are the oldest and simplest logical derivation systems.
Its derivations are simply sequences of sentences. A sequence of sentences
counts as a correct derivation if every sentence ϕ in it satisfies one of the
following conditions:

1. ϕ is an axiom, or

2. ϕ is an element of a given set Γ of sentences, or

3. ϕ is justified by a rule of inference.

To be an axiom, ϕ has to have the form of one of a number of fixed sentence


schemas. There are many sets of axiom schemas that provide a satisfactory
(sound and complete) derivation system for first-order logic. Some are orga-
nized according to the connectives they govern, e.g., the schemas

ϕ → (ψ → ϕ) ψ → (ψ ∨ χ) (ψ ∧ χ) → ψ


are common axioms that govern →, ∨ and ∧. Some axiom systems aim at a
minimal number of axioms. Depending on the connectives that are taken as
primitives, it is even possible to find axiom systems that consist of a single
axiom.
A rule of inference is a conditional statement that gives a sufficient condi-
tion for a sentence in a derivation to be justified. Modus ponens is one very
common such rule: it says that if ϕ and ϕ → ψ are already justified, then ψ is
justified. This means that a line in a derivation containing the sentence ψ is
justified, provided that both ϕ and ϕ → ψ (for some sentence ϕ) appear in the
derivation before ψ.
The ⊢ relation based on axiomatic derivations is defined as follows: Γ ⊢ ϕ
iff there is a derivation with the sentence ϕ as its last formula (and Γ is taken
as the set of sentences in that derivation which are justified by (2) above). ϕ
is a theorem if ϕ has a derivation where Γ is empty, i.e., every sentence in the
derivation is justified either by (1) or (3). For instance, here is a derivation that
shows that ⊢ ϕ → (ψ → (ψ ∨ ϕ)):

1. ψ → (ψ ∨ ϕ)
2. (ψ → (ψ ∨ ϕ)) → ( ϕ → (ψ → (ψ ∨ ϕ)))
3. ϕ → (ψ → (ψ ∨ ϕ))

The sentence on line 1 is of the form of the axiom ϕ → ( ϕ ∨ ψ) (with the roles
of ϕ and ψ reversed). The sentence on line 2 is of the form of the axiom ϕ →
(ψ → ϕ). Thus, both lines are justified. Line 3 is justified by modus ponens: if
we abbreviate it as θ, then line 2 has the form χ → θ, where χ is ψ → (ψ ∨ ϕ),
i.e., line 1.
A set Γ is inconsistent if Γ ⊢ ⊥. A complete axiom system will also prove
⊥ → ϕ for any ϕ, and so if Γ is inconsistent, then Γ ⊢ ϕ for any ϕ.
Systems of axiomatic derivations for logic were first given by Gottlob Frege
in his 1879 Begriffsschrift, which for this reason is often considered the first
work of modern logic. They were perfected in Alfred North Whitehead and
Bertrand Russell’s Principia Mathematica and by David Hilbert and his stu-
dents in the 1920s. They are thus often called “Frege systems” or “Hilbert
systems.” They are very versatile in that it is often easy to find an axiomatic
system for a logic. Because derivations have a very simple structure and only
one or two inference rules, it is also relatively easy to prove things about them.
However, they are very hard to use in practice, i.e., it is difficult to find and
write proofs.



Chapter 15

The Sequent Calculus

This chapter presents Gentzen’s standard sequent calculus LK for clas-


sical first-order logic. It could use more examples and exercises. To in-
clude or exclude material relevant to the sequent calculus as a proof sys-
tem, use the “prfLK” tag.

15.1 Rules and Derivations


For the following, let Γ, ∆, Π, Λ represent finite sequences of sentences.

Definition 15.1 (Sequent). A sequent is an expression of the form

Γ⇒∆

where Γ and ∆ are finite (possibly empty) sequences of sentences of the lan-
guage L. Γ is called the antecedent, while ∆ is the succedent.

The intuitive idea behind a sequent is: if all of the sentences in the an-
tecedent hold, then at least one of the sentences in the succedent holds. That
is, if Γ = ⟨ϕ₁, . . . , ϕₘ⟩ and ∆ = ⟨ψ₁, . . . , ψₙ⟩, then Γ ⇒ ∆ holds iff

(ϕ₁ ∧ · · · ∧ ϕₘ) → (ψ₁ ∨ · · · ∨ ψₙ)

holds. There are two special cases: when Γ is empty and when ∆ is empty.
When Γ is empty, i.e., m = 0, ⇒ ∆ holds iff ψ₁ ∨ · · · ∨ ψₙ holds. When ∆ is
empty, i.e., n = 0, Γ ⇒ holds iff ¬(ϕ₁ ∧ · · · ∧ ϕₘ) does. We say a sequent is
valid iff the corresponding sentence is valid.
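For instance, the sequent ϕ, ϕ → ψ ⇒ ψ corresponds to the sentence (ϕ ∧ (ϕ → ψ)) → ψ, which is valid, so the sequent is valid as well. A sequent ϕ ⇒ with empty succedent, on the other hand, is valid iff ¬ϕ is valid, i.e., iff ϕ is unsatisfiable.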
If Γ is a sequence of sentences, we write Γ, ϕ for the result of appending
ϕ to the right end of Γ (and ϕ, Γ for the result of appending ϕ to the left end
of Γ). If ∆ is a sequence of sentences also, then Γ, ∆ is the concatenation of the
two sequences.


Definition 15.2 (Initial Sequent). An initial sequent is a sequent of one of the


following forms:

1. ϕ ⇒ ϕ

2. ⊥ ⇒

for any sentence ϕ in the language.

Derivations in the sequent calculus are certain trees of sequents, where


the topmost sequents are initial sequents, and if a sequent stands below one
or two other sequents, it must follow correctly by a rule of inference. The
rules for LK are divided into two main types: logical rules and structural rules.
The logical rules are named for the main operator of the sentence containing
ϕ and/or ψ in the lower sequent. Each one comes in two versions, one for
inferring a sequent with the sentence containing the logical operator on the left,
and one with the sentence on the right.

15.2 Propositional Rules

Rules for ¬

Γ ⇒ ∆, ϕ
────────── ¬L
¬ϕ, Γ ⇒ ∆

ϕ, Γ ⇒ ∆
────────── ¬R
Γ ⇒ ∆, ¬ϕ

Rules for ∧

ϕ, Γ ⇒ ∆
──────────── ∧L
ϕ ∧ ψ, Γ ⇒ ∆

ψ, Γ ⇒ ∆
──────────── ∧L
ϕ ∧ ψ, Γ ⇒ ∆

Γ ⇒ ∆, ϕ    Γ ⇒ ∆, ψ
───────────────────── ∧R
Γ ⇒ ∆, ϕ ∧ ψ

Rules for ∨

ϕ, Γ ⇒ ∆    ψ, Γ ⇒ ∆
───────────────────── ∨L
ϕ ∨ ψ, Γ ⇒ ∆

Γ ⇒ ∆, ϕ
──────────── ∨R
Γ ⇒ ∆, ϕ ∨ ψ

Γ ⇒ ∆, ψ
──────────── ∨R
Γ ⇒ ∆, ϕ ∨ ψ

174 Release: (None) ((None))


15.3. QUANTIFIER RULES

Rules for →

Γ ⇒ ∆, ϕ    ψ, Π ⇒ Λ
────────────────────── →L
ϕ → ψ, Γ, Π ⇒ ∆, Λ

ϕ, Γ ⇒ ∆, ψ
────────────── →R
Γ ⇒ ∆, ϕ → ψ

15.3 Quantifier Rules

Rules for ∀

ϕ(t), Γ ⇒ ∆
─────────────── ∀L
∀x ϕ(x), Γ ⇒ ∆

Γ ⇒ ∆, ϕ(a)
─────────────── ∀R
Γ ⇒ ∆, ∀x ϕ(x)

In ∀L, t is a closed term (i.e., one without variables). In ∀R, a is a constant


symbol which must not occur anywhere in the lower sequent of the ∀R rule.
We call a the eigenvariable of the ∀R inference.

Rules for ∃

ϕ(a), Γ ⇒ ∆
─────────────── ∃L
∃x ϕ(x), Γ ⇒ ∆

Γ ⇒ ∆, ϕ(t)
─────────────── ∃R
Γ ⇒ ∆, ∃x ϕ(x)

Again, t is a closed term, and a is a constant symbol which does not occur in
the lower sequent of the ∃L rule. We call a the eigenvariable of the ∃L inference.
The condition that an eigenvariable not occur in the lower sequent of the
∀R or ∃L inference is called the eigenvariable condition.
We use the term “eigenvariable” even though a in the above rules is a con-
stant symbol. This is for historical reasons.
In ∃R and ∀L there are no restrictions on the term t. On the other hand,
in the ∃L and ∀R rules, the eigenvariable condition requires that the constant
symbol a does not occur anywhere outside of ϕ(a) in the upper sequent. This
condition is necessary to ensure that the system is sound, i.e., only derives sequents that
are valid. Without this condition, the following would be allowed:

ϕ(a) ⇒ ϕ(a)
────────────── *∃L
∃x ϕ(x) ⇒ ϕ(a)
────────────────── ∀R
∃x ϕ(x) ⇒ ∀x ϕ(x)

ϕ(a) ⇒ ϕ(a)
────────────── *∀R
ϕ(a) ⇒ ∀x ϕ(x)
────────────────── ∃L
∃x ϕ(x) ⇒ ∀x ϕ(x)
However, ∃ x ϕ( x ) ⇒ ∀ x ϕ( x ) is not valid.


15.4 Structural Rules


We also need a few rules that allow us to rearrange sentences in the left and
right side of a sequent. Since the logical rules require that the sentences in the
premise which the rule acts upon stand either to the far left or to the far right,
we need an “exchange” rule that allows us to move sentences to the right
position. It’s also important sometimes to be able to combine two identical
sentences into one, and to add a sentence on either side.

Weakening

Γ ⇒ ∆
────────── WL
ϕ, Γ ⇒ ∆

Γ ⇒ ∆
────────── WR
Γ ⇒ ∆, ϕ

Contraction

ϕ, ϕ, Γ ⇒ ∆
───────────── CL
ϕ, Γ ⇒ ∆

Γ ⇒ ∆, ϕ, ϕ
───────────── CR
Γ ⇒ ∆, ϕ

Exchange

Γ, ϕ, ψ, Π ⇒ ∆
──────────────── XL
Γ, ψ, ϕ, Π ⇒ ∆

Γ ⇒ ∆, ϕ, ψ, Λ
──────────────── XR
Γ ⇒ ∆, ψ, ϕ, Λ

A series of weakening, contraction, and exchange inferences will often be in-


dicated by double inference lines.
The following rule, called “cut,” is not strictly speaking necessary, but
makes it a lot easier to reuse and combine derivations.

Γ ⇒ ∆, ϕ    ϕ, Π ⇒ Λ
────────────────────── Cut
Γ, Π ⇒ ∆, Λ

15.5 Derivations
We’ve said what an initial sequent looks like, and we’ve given the rules of
inference. Derivations in the sequent calculus are inductively generated from


these: each derivation either is an initial sequent on its own, or consists of one
or two derivations followed by an inference.

Definition 15.3 (LK derivation). An LK-derivation of a sequent S is a tree of


sequents satisfying the following conditions:

1. The topmost sequents of the tree are initial sequents.

2. The bottommost sequent of the tree is S.

3. Every sequent in the tree except S is a premise of a correct application of


an inference rule whose conclusion stands directly below that sequent
in the tree.

We then say that S is the end-sequent of the derivation and that S is derivable in
LK (or LK-derivable).

Example 15.4. Every initial sequent, e.g., χ ⇒ χ is a derivation. We can obtain


a new derivation from this by applying, say, the WL rule,

Γ ⇒ ∆
────────── WL
ϕ, Γ ⇒ ∆
The rule, however, is meant to be general: we can replace the ϕ in the rule
with any sentence, e.g., also with θ. If the premise matches our initial sequent
χ ⇒ χ, that means that both Γ and ∆ are just χ, and the conclusion would
then be θ, χ ⇒ χ. So, the following is a derivation:

χ ⇒ χ
────────── WL
θ, χ ⇒ χ
We can now apply another rule, say XL, which allows us to switch two sen-
tences on the left. So, the following is also a correct derivation:

χ ⇒ χ
────────── WL
θ, χ ⇒ χ
────────── XL
χ, θ ⇒ χ
In this application of the rule, which was given as

Γ, ϕ, ψ, Π ⇒ ∆
──────────────── XL
Γ, ψ, ϕ, Π ⇒ ∆,
both Γ and Π were empty, ∆ is χ, and the roles of ϕ and ψ are played by θ
and χ, respectively. In much the same way, we also see that

θ ⇒ θ
────────── WL
χ, θ ⇒ θ


is a derivation. Now we can take these two derivations, and combine them
using ∧R. That rule was

Γ ⇒ ∆, ϕ    Γ ⇒ ∆, ψ
───────────────────── ∧R
Γ ⇒ ∆, ϕ ∧ ψ
In our case, the premises must match the last sequents of the derivations end-
ing in the premises. That means that Γ is χ, θ, ∆ is empty, ϕ is χ and ψ is θ. So
the conclusion, if the inference should be correct, is χ, θ ⇒ χ ∧ θ. Of course,
we can also reverse the premises, then ϕ would be θ and ψ would be χ. So
both of the following are correct derivations.
χ ⇒ χ
───────── WL
θ, χ ⇒ χ              θ ⇒ θ
───────── XL          ───────── WL
χ, θ ⇒ χ              χ, θ ⇒ θ
────────────────────────────── ∧R
χ, θ ⇒ χ ∧ θ

θ ⇒ θ                 χ ⇒ χ
───────── WL          ───────── WL
χ, θ ⇒ θ              θ, χ ⇒ χ
                      ───────── XL
                      χ, θ ⇒ χ
────────────────────────────── ∧R
χ, θ ⇒ θ ∧ χ

15.6 Examples of Derivations


Example 15.5. Give an LK-derivation for the sequent ϕ ∧ ψ ⇒ ϕ.
We begin by writing the desired end-sequent at the bottom of the deriva-
tion.

ϕ∧ψ ⇒ ϕ
Next, we need to figure out what kind of inference could have a lower sequent
of this form. This could be a structural rule, but it is a good idea to start by
looking for a logical rule. The only logical connective occurring in the lower
sequent is ∧, so we’re looking for an ∧ rule, and since the ∧ symbol occurs in
the antecedent, we’re looking at the ∧L rule.

──────────── ∧L
ϕ ∧ ψ ⇒ ϕ
There are two options for what could have been the upper sequent of the ∧L
inference: we could have an upper sequent of ϕ ⇒ ϕ, or of ψ ⇒ ϕ. Clearly,
ϕ ⇒ ϕ is an initial sequent (which is a good thing), while ψ ⇒ ϕ is not
derivable in general. We fill in the upper sequent:
ϕ ⇒ ϕ
──────────── ∧L
ϕ ∧ ψ ⇒ ϕ

We now have a correct LK-derivation of the sequent ϕ ∧ ψ ⇒ ϕ.

Example 15.6. Give an LK-derivation for the sequent ¬ ϕ ∨ ψ ⇒ ϕ → ψ.


Begin by writing the desired end-sequent at the bottom of the derivation.

¬ϕ ∨ ψ ⇒ ϕ → ψ


To find a logical rule that could give us this end-sequent, we look at the log-
ical connectives in the end-sequent: ¬, ∨, and →. We only care at the mo-
ment about ∨ and → because they are main operators of sentences in the end-
sequent, while ¬ is inside the scope of another connective, so we will take care
of it later. Our options for logical rules for the final inference are therefore the
∨L rule and the →R rule. We could pick either rule, really, but let’s pick the
→R rule (if for no reason other than it allows us to put off splitting into two
branches). According to the form of →R inferences which can yield the lower
sequent, this must look like:

ϕ, ¬ϕ ∨ ψ ⇒ ψ
──────────────── →R
¬ϕ ∨ ψ ⇒ ϕ → ψ
If we move ¬ ϕ ∨ ψ to the outside of the antecedent, we can apply the ∨L
rule. According to the schema, this must split into two upper sequents as
follows:

¬ϕ, ϕ ⇒ ψ    ψ, ϕ ⇒ ψ
────────────────────── ∨L
¬ϕ ∨ ψ, ϕ ⇒ ψ
──────────────── XL
ϕ, ¬ϕ ∨ ψ ⇒ ψ
──────────────── →R
¬ϕ ∨ ψ ⇒ ϕ → ψ
Remember that we are trying to wind our way up to initial sequents; we seem
to be pretty close! The right branch is just one weakening and one exchange
away from an initial sequent and then it is done:

                    ψ ⇒ ψ
                    ────────── WL
                    ϕ, ψ ⇒ ψ
                    ────────── XL
¬ϕ, ϕ ⇒ ψ          ψ, ϕ ⇒ ψ
──────────────────────────── ∨L
¬ϕ ∨ ψ, ϕ ⇒ ψ
──────────────── XL
ϕ, ¬ϕ ∨ ψ ⇒ ψ
──────────────── →R
¬ϕ ∨ ψ ⇒ ϕ → ψ

Now looking at the left branch, the only logical connective in any sentence
is the ¬ symbol in the antecedent sentences, so we’re looking at an instance of
the ¬L rule.
                    ψ ⇒ ψ
                    ────────── WL
ϕ ⇒ ψ, ϕ           ϕ, ψ ⇒ ψ
────────── ¬L      ────────── XL
¬ϕ, ϕ ⇒ ψ          ψ, ϕ ⇒ ψ
──────────────────────────── ∨L
¬ϕ ∨ ψ, ϕ ⇒ ψ
──────────────── XL
ϕ, ¬ϕ ∨ ψ ⇒ ψ
──────────────── →R
¬ϕ ∨ ψ ⇒ ϕ → ψ

Similarly to how we finished off the right branch, we are just one weakening
and one exchange away from finishing off this left branch as well.


ϕ ⇒ ϕ
─────────── WR
ϕ ⇒ ϕ, ψ               ψ ⇒ ψ
─────────── XR         ────────── WL
ϕ ⇒ ψ, ϕ               ϕ, ψ ⇒ ψ
─────────── ¬L         ────────── XL
¬ϕ, ϕ ⇒ ψ              ψ, ϕ ⇒ ψ
──────────────────────────────── ∨L
¬ϕ ∨ ψ, ϕ ⇒ ψ
──────────────── XL
ϕ, ¬ϕ ∨ ψ ⇒ ψ
──────────────── →R
¬ϕ ∨ ψ ⇒ ϕ → ψ

Example 15.7. Give an LK-derivation of the sequent ¬ϕ ∨ ¬ψ ⇒ ¬(ϕ ∧ ψ).


Using the techniques from above, we start by writing the desired end-
sequent at the bottom.

¬ ϕ ∨ ¬ψ ⇒ ¬( ϕ ∧ ψ)
The available main connectives of sentences in the end-sequent are the ∨ sym-
bol and the ¬ symbol. It would work to apply either the ∨L or the ¬R rule
here, but we start with the ¬R rule because it avoids splitting up into two
branches for a moment:

ϕ ∧ ψ, ¬ϕ ∨ ¬ψ ⇒
──────────────────────── ¬R
¬ϕ ∨ ¬ψ ⇒ ¬(ϕ ∧ ψ)
Now we have a choice of whether to look at the ∧L or the ∨L rule. Let’s see
what happens when we apply the ∧L rule: we have a choice to start with
either the sequent ϕ, ¬ϕ ∨ ¬ψ ⇒ or the sequent ψ, ¬ϕ ∨ ¬ψ ⇒ . Since the
proof is symmetric with regards to ϕ and ψ, let’s go with the former:

ϕ, ¬ϕ ∨ ¬ψ ⇒
────────────────── ∧L
ϕ ∧ ψ, ¬ϕ ∨ ¬ψ ⇒
──────────────────────── ¬R
¬ϕ ∨ ¬ψ ⇒ ¬(ϕ ∧ ψ)
Continuing to fill in the derivation, we see that we run into a problem:

?
ϕ ⇒ ϕ ϕ ⇒ ψ
¬ ϕ, ϕ ⇒ ¬L ¬ψ, ϕ ⇒ ¬L
¬ ϕ ∨ ¬ψ, ϕ ⇒ ∨ L
ϕ, ¬ ϕ ∨ ¬ψ ⇒ XL
ϕ ∧ ψ, ¬ ϕ ∨ ¬ψ ⇒ ∧L
¬R
¬ ϕ ∨ ¬ψ ⇒ ¬( ϕ ∧ ψ)
The top of the right branch cannot be reduced any further, and it cannot be
brought by way of structural inferences to an initial sequent, so this is not the
right path to take. So clearly, it was a mistake to apply the ∧L rule above.
Going back to what we had before and carrying out the ∨L rule instead, we
get


¬ϕ, ϕ ∧ ψ ⇒    ¬ψ, ϕ ∧ ψ ⇒
──────────────────────────── ∨L
¬ϕ ∨ ¬ψ, ϕ ∧ ψ ⇒
────────────────── XL
ϕ ∧ ψ, ¬ϕ ∨ ¬ψ ⇒
──────────────────────── ¬R
¬ϕ ∨ ¬ψ ⇒ ¬(ϕ ∧ ψ)
Completing each branch as we’ve done before, we get

ϕ ⇒ ϕ                  ψ ⇒ ψ
──────────── ∧L        ──────────── ∧L
ϕ ∧ ψ ⇒ ϕ              ϕ ∧ ψ ⇒ ψ
──────────── ¬L        ──────────── ¬L
¬ϕ, ϕ ∧ ψ ⇒            ¬ψ, ϕ ∧ ψ ⇒
──────────────────────────────────── ∨L
¬ϕ ∨ ¬ψ, ϕ ∧ ψ ⇒
────────────────── XL
ϕ ∧ ψ, ¬ϕ ∨ ¬ψ ⇒
──────────────────────── ¬R
¬ϕ ∨ ¬ψ ⇒ ¬(ϕ ∧ ψ)
(We could have carried out the ∧ rules lower than the ¬ rules in these steps
and still obtained a correct derivation).

Example 15.8. So far we haven’t used the contraction rule, but it is sometimes
required. Here’s an example where that happens. Suppose we want to prove
⇒ ϕ ∨ ¬ϕ. Applying ∨R backwards would give us one of these two derivations:

⇒ ϕ
──────────── ∨R
⇒ ϕ ∨ ¬ϕ

ϕ ⇒
──────── ¬R
⇒ ¬ϕ
──────────── ∨R
⇒ ϕ ∨ ¬ϕ
Neither of these of course ends in an initial sequent. The trick is to realize that
the contraction rule allows us to combine two copies of a sentence into one—
and when we’re searching for a proof, i.e., going from bottom to top, we can
keep a copy of ϕ ∨ ¬ ϕ in the premise, e.g.,

⇒ ϕ ∨ ¬ϕ, ϕ
──────────────────── ∨R
⇒ ϕ ∨ ¬ϕ, ϕ ∨ ¬ϕ
──────────────────── CR
⇒ ϕ ∨ ¬ϕ

Now we can apply ∨R a second time, and also get ¬ ϕ, which leads to a com-
plete derivation.

ϕ ⇒ ϕ
─────────── ¬R
⇒ ϕ, ¬ϕ
──────────────── ∨R
⇒ ϕ, ϕ ∨ ¬ϕ
──────────────── XR
⇒ ϕ ∨ ¬ϕ, ϕ
──────────────────── ∨R
⇒ ϕ ∨ ¬ϕ, ϕ ∨ ¬ϕ
──────────────────── CR
⇒ ϕ ∨ ¬ϕ


15.7 Derivations with Quantifiers


Example 15.9. Give an LK-derivation of the sequent ∃ x ¬ ϕ( x ) ⇒ ¬∀ x ϕ( x ).
When dealing with quantifiers, we have to make sure not to violate the
eigenvariable condition, and sometimes this requires us to play around with
the order of carrying out certain inferences. In general, it helps to try and take
care of rules subject to the eigenvariable condition first (they will be lower
down in the finished proof). Also, it is a good idea to try and look ahead and
try to guess what the initial sequent might look like. In our case, it will have to
be something like ϕ( a) ⇒ ϕ( a). That means that when we are “reversing” the
quantifier rules, we will have to pick the same term—what we will call a—for
both the ∀ and the ∃ rule. If we picked different terms for each rule, we would
end up with something like ϕ( a) ⇒ ϕ(b), which, of course, is not derivable.
Starting as usual, we write

∃ x ¬ ϕ( x ) ⇒ ¬∀ x ϕ( x )
We could either carry out the ∃L rule or the ¬R rule. Since the ∃L rule is
subject to the eigenvariable condition, it’s a good idea to take care of it sooner
rather than later, so we’ll do that one first.

¬ϕ(a) ⇒ ¬∀x ϕ(x)
───────────────────── ∃L
∃x ¬ϕ(x) ⇒ ¬∀x ϕ(x)
Applying the ¬L and ¬R rules backwards, we get

∀x ϕ(x) ⇒ ϕ(a)
───────────────── ¬L
¬ϕ(a), ∀x ϕ(x) ⇒
───────────────── XL
∀x ϕ(x), ¬ϕ(a) ⇒
──────────────────── ¬R
¬ϕ(a) ⇒ ¬∀x ϕ(x)
───────────────────── ∃L
∃x ¬ϕ(x) ⇒ ¬∀x ϕ(x)
At this point, our only option is to carry out the ∀L rule. Since this rule is not
subject to the eigenvariable restriction, we’re in the clear. Remember, we want
to try and obtain an initial sequent (of the form ϕ( a) ⇒ ϕ( a)), so we should
choose a as our argument for ϕ when we apply the rule.

ϕ(a) ⇒ ϕ(a)
───────────────── ∀L
∀x ϕ(x) ⇒ ϕ(a)
───────────────── ¬L
¬ϕ(a), ∀x ϕ(x) ⇒
───────────────── XL
∀x ϕ(x), ¬ϕ(a) ⇒
──────────────────── ¬R
¬ϕ(a) ⇒ ¬∀x ϕ(x)
───────────────────── ∃L
∃x ¬ϕ(x) ⇒ ¬∀x ϕ(x)


It is important, especially when dealing with quantifiers, to double check at


this point that the eigenvariable condition has not been violated. Since the
only rule we applied that is subject to the eigenvariable condition was ∃L,
and the eigenvariable a does not occur in its lower sequent (the end-sequent),
this is a correct derivation.

This section collects the definitions of the provability relation and con-
sistency for the sequent calculus.

15.8 Proof-Theoretic Notions


Just as we’ve defined a number of important semantic notions (validity, entail-
ment, satisfiabilty), we now define corresponding proof-theoretic notions. These
are not defined by appeal to satisfaction of sentences in structures, but by ap-
peal to the derivability or non-derivability of certain sequents. It was an im-
portant discovery that these notions coincide. That they do is the content of
the soundness and completeness theorem.

Definition 15.10 (Theorems). A sentence ϕ is a theorem if there is a derivation
in LK of the sequent ⇒ ϕ. We write ⊢ ϕ if ϕ is a theorem and ⊬ ϕ if it is not.

Definition 15.11 (Derivability). A sentence ϕ is derivable from a set of sen-
tences Γ, Γ ⊢ ϕ, iff there is a finite subset Γ₀ ⊆ Γ and a sequence Γ₀′ of the
sentences in Γ₀ such that LK derives Γ₀′ ⇒ ϕ. If ϕ is not derivable from Γ we
write Γ ⊬ ϕ.

Because of the contraction, weakening, and exchange rules, the order and
number of sentences in Γ₀′ does not matter: if a sequent Γ₀′ ⇒ ϕ is deriv-
able, then so is Γ₀′′ ⇒ ϕ for any Γ₀′′ that contains the same sentences as Γ₀′.
For instance, if Γ₀ = {ψ, χ} then both Γ₀′ = ⟨ψ, ψ, χ⟩ and Γ₀′′ = ⟨χ, χ, ψ⟩ are
sequences containing just the sentences in Γ₀. If a sequent containing one is
derivable, so is the other, e.g.:

ψ, ψ, χ ⇒ ϕ
──────────── CL
ψ, χ ⇒ ϕ
──────────── XL
χ, ψ ⇒ ϕ
──────────── WL
χ, χ, ψ ⇒ ϕ

From now on we’ll say that if Γ₀ is a finite set of sentences then Γ₀ ⇒ ϕ is
any sequent where the antecedent is a sequence of sentences in Γ₀ and tacitly
include contractions, exchanges, and weakenings if necessary.


Definition 15.12 (Consistency). A set of sentences Γ is inconsistent iff there is a
finite subset Γ₀ ⊆ Γ such that LK derives Γ₀ ⇒ . If Γ is not inconsistent, i.e.,
if for every finite Γ₀ ⊆ Γ, LK does not derive Γ₀ ⇒ , we say it is consistent.

Proposition 15.13 (Reflexivity). If ϕ ∈ Γ, then Γ ⊢ ϕ.

Proof. The initial sequent ϕ ⇒ ϕ is derivable, and {ϕ} ⊆ Γ.

Proposition 15.14 (Monotony). If Γ ⊆ ∆ and Γ ⊢ ϕ, then ∆ ⊢ ϕ.

Proof. Suppose Γ ⊢ ϕ, i.e., there is a finite Γ₀ ⊆ Γ such that Γ₀ ⇒ ϕ is deriv-
able. Since Γ ⊆ ∆, Γ₀ is also a finite subset of ∆. The derivation of Γ₀ ⇒ ϕ
thus also shows ∆ ⊢ ϕ.

Proposition 15.15 (Transitivity). If Γ ⊢ ϕ and {ϕ} ∪ ∆ ⊢ ψ, then Γ ∪ ∆ ⊢ ψ.

Proof. If Γ ⊢ ϕ, there is a finite Γ₀ ⊆ Γ and a derivation π₀ of Γ₀ ⇒ ϕ. If
{ϕ} ∪ ∆ ⊢ ψ, then for some finite subset ∆₀ ⊆ ∆, there is a derivation π₁ of
ϕ, ∆₀ ⇒ ψ. Consider the following derivation:

π₀                 π₁

Γ₀ ⇒ ϕ            ϕ, ∆₀ ⇒ ψ
───────────────────────────── Cut
Γ₀, ∆₀ ⇒ ψ

Since Γ₀ ∪ ∆₀ ⊆ Γ ∪ ∆, this shows Γ ∪ ∆ ⊢ ψ.

Note that this means that in particular if Γ ⊢ ϕ and ϕ ⊢ ψ, then Γ ⊢ ψ. It
follows also that if ϕ₁, . . . , ϕₙ ⊢ ψ and Γ ⊢ ϕᵢ for each i, then Γ ⊢ ψ.

Proposition 15.16. Γ is inconsistent iff Γ ⊢ ϕ for every sentence ϕ.

Proof. Exercise.

Proposition 15.17 (Compactness). 1. If Γ ⊢ ϕ then there is a finite subset
Γ₀ ⊆ Γ such that Γ₀ ⊢ ϕ.

2. If every finite subset of Γ is consistent, then Γ is consistent.

Proof. 1. If Γ ⊢ ϕ, then there is a finite subset Γ₀ ⊆ Γ such that the sequent
Γ₀ ⇒ ϕ has a derivation. Consequently, Γ₀ ⊢ ϕ.

2. If Γ is inconsistent, there is a finite subset Γ₀ ⊆ Γ such that LK derives
Γ₀ ⇒ . But then Γ₀ is a finite subset of Γ that is inconsistent.


15.9 Derivability and Consistency


We will now establish a number of properties of the derivability relation. They
are independently interesting, but each will play a role in the proof of the
completeness theorem.

Proposition 15.18. If Γ ⊢ ϕ and Γ ∪ {ϕ} is inconsistent, then Γ is inconsistent.

Proof. There are finite Γ₀ ⊆ Γ and Γ₁ ⊆ Γ such that LK derives Γ₀ ⇒ ϕ and
ϕ, Γ₁ ⇒ . Let the LK-derivation of Γ₀ ⇒ ϕ be π₀ and the LK-derivation of
ϕ, Γ₁ ⇒ be π₁. We can then derive

π₀                π₁

Γ₀ ⇒ ϕ           ϕ, Γ₁ ⇒
──────────────────────────── Cut
Γ₀, Γ₁ ⇒

Since Γ₀ ⊆ Γ and Γ₁ ⊆ Γ, Γ₀ ∪ Γ₁ ⊆ Γ, hence Γ is inconsistent.

Proposition 15.19. Γ ⊢ ϕ iff Γ ∪ {¬ϕ} is inconsistent.

Proof. First suppose Γ ⊢ ϕ, i.e., there is a derivation π₀ of Γ₀ ⇒ ϕ for some
finite Γ₀ ⊆ Γ. By adding a ¬L rule, we obtain a derivation of ¬ϕ, Γ₀ ⇒ , i.e.,
Γ ∪ {¬ϕ} is inconsistent.

If Γ ∪ {¬ϕ} is inconsistent, there is a derivation π₁ of ¬ϕ, Γ₀ ⇒ for some
finite Γ₀ ⊆ Γ. The following is a derivation of Γ₀ ⇒ ϕ:

                   π₁
ϕ ⇒ ϕ
────────── ¬R
⇒ ϕ, ¬ϕ           ¬ϕ, Γ₀ ⇒
──────────────────────────── Cut
Γ₀ ⇒ ϕ

Proposition 15.20. If Γ ⊢ ϕ and ¬ϕ ∈ Γ, then Γ is inconsistent.

Proof. Suppose Γ ⊢ ϕ and ¬ϕ ∈ Γ. Then there is a derivation π of a sequent
Γ₀ ⇒ ϕ. The sequent ¬ϕ, Γ₀ ⇒ is also derivable:

                   ϕ ⇒ ϕ
                   ────────── ¬L
π                  ¬ϕ, ϕ ⇒
                   ────────── XL
Γ₀ ⇒ ϕ            ϕ, ¬ϕ ⇒
──────────────────────────── Cut
Γ₀, ¬ϕ ⇒

Since ¬ϕ ∈ Γ and Γ₀ ⊆ Γ, this shows that Γ is inconsistent.

Proposition 15.21. If Γ ∪ {ϕ} and Γ ∪ {¬ϕ} are both inconsistent, then Γ is
inconsistent.


Proof. There are finite sets Γ₀ ⊆ Γ and Γ₁ ⊆ Γ and LK-derivations π₀ and π₁
of ϕ, Γ₀ ⇒ and ¬ϕ, Γ₁ ⇒ , respectively. We can then derive

π₀
                   π₁
ϕ, Γ₀ ⇒
────────── ¬R
Γ₀ ⇒ ¬ϕ           ¬ϕ, Γ₁ ⇒
──────────────────────────── Cut
Γ₀, Γ₁ ⇒

Since Γ₀ ⊆ Γ and Γ₁ ⊆ Γ, Γ₀ ∪ Γ₁ ⊆ Γ. Hence Γ is inconsistent.

15.10 Derivability and the Propositional Connectives


Proposition 15.22. 1. Both ϕ ∧ ψ ⊢ ϕ and ϕ ∧ ψ ⊢ ψ.

2. ϕ, ψ ⊢ ϕ ∧ ψ.

Proof. 1. Both sequents ϕ ∧ ψ ⇒ ϕ and ϕ ∧ ψ ⇒ ψ are derivable:

ϕ ⇒ ϕ                  ψ ⇒ ψ
──────────── ∧L        ──────────── ∧L
ϕ ∧ ψ ⇒ ϕ              ϕ ∧ ψ ⇒ ψ

2. Here is a derivation of the sequent ϕ, ψ ⇒ ϕ ∧ ψ:

ϕ ⇒ ϕ    ψ ⇒ ψ
────────────────── ∧R
ϕ, ψ ⇒ ϕ ∧ ψ

Proposition 15.23. 1. ϕ ∨ ψ, ¬ϕ, ¬ψ is inconsistent.

2. Both ϕ ⊢ ϕ ∨ ψ and ψ ⊢ ϕ ∨ ψ.

Proof. 1. We give a derivation of the sequent ϕ ∨ ψ, ¬ϕ, ¬ψ ⇒ :

ϕ ⇒ ϕ                  ψ ⇒ ψ
────────── ¬L          ────────── ¬L
¬ϕ, ϕ ⇒                ¬ψ, ψ ⇒
═════════════          ═════════════
ϕ, ¬ϕ, ¬ψ ⇒            ψ, ¬ϕ, ¬ψ ⇒
──────────────────────────────────── ∨L
ϕ ∨ ψ, ¬ϕ, ¬ψ ⇒

(Recall that double inference lines indicate several weakening, contrac-
tion, and exchange inferences.)

2. Both sequents ϕ ⇒ ϕ ∨ ψ and ψ ⇒ ϕ ∨ ψ have derivations:


ϕ ⇒ ϕ                  ψ ⇒ ψ
──────────── ∨R        ──────────── ∨R
ϕ ⇒ ϕ ∨ ψ              ψ ⇒ ϕ ∨ ψ

Proposition 15.24. 1. ϕ, ϕ → ψ ⊢ ψ.

2. Both ¬ϕ ⊢ ϕ → ψ and ψ ⊢ ϕ → ψ.

Proof. 1. The sequent ϕ → ψ, ϕ ⇒ ψ is derivable:

ϕ ⇒ ϕ    ψ ⇒ ψ
────────────────── →L
ϕ → ψ, ϕ ⇒ ψ

2. Both sequents ¬ϕ ⇒ ϕ → ψ and ψ ⇒ ϕ → ψ are derivable:

ϕ ⇒ ϕ
────────── ¬L
¬ϕ, ϕ ⇒
────────── XL
ϕ, ¬ϕ ⇒
────────── WR
ϕ, ¬ϕ ⇒ ψ
──────────── →R
¬ϕ ⇒ ϕ → ψ

ψ ⇒ ψ
────────── WL
ϕ, ψ ⇒ ψ
──────────── →R
ψ ⇒ ϕ → ψ
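These building blocks can be combined. For instance, here is one way of deriving modus tollens, ϕ → ψ, ¬ψ ⊢ ¬ϕ (the double inference line again abbreviates structural inferences):

ϕ ⇒ ϕ    ψ ⇒ ψ
────────────────── →L
ϕ → ψ, ϕ ⇒ ψ
────────────────── ¬L
¬ψ, ϕ → ψ, ϕ ⇒
══════════════════
ϕ, ϕ → ψ, ¬ψ ⇒
────────────────── ¬R
ϕ → ψ, ¬ψ ⇒ ¬ϕ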

15.11 Derivability and the Quantifiers


Theorem 15.25. If c is a constant symbol not occurring in Γ or ϕ(x) and Γ ⊢ ϕ(c), then
Γ ⊢ ∀x ϕ(x).

Proof. Let π₀ be an LK-derivation of Γ₀ ⇒ ϕ(c) for some finite Γ₀ ⊆ Γ. By
adding a ∀R inference, we obtain a derivation of Γ₀ ⇒ ∀x ϕ(x), since c does not
occur in Γ or ϕ(x) and thus the eigenvariable condition is satisfied.

Proposition 15.26. 1. ϕ(t) ⊢ ∃x ϕ(x).

2. ∀x ϕ(x) ⊢ ϕ(t).

Proof. 1. The sequent ϕ(t) ⇒ ∃x ϕ(x) is derivable:

ϕ(t) ⇒ ϕ(t)
───────────────── ∃R
ϕ(t) ⇒ ∃x ϕ(x)

2. The sequent ∀x ϕ(x) ⇒ ϕ(t) is derivable:

ϕ(t) ⇒ ϕ(t)
───────────────── ∀L
∀x ϕ(x) ⇒ ϕ(t)
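The two rules can also be chained. For instance, assuming the language contains at least one closed term t, the following derivation shows ∀x ϕ(x) ⊢ ∃x ϕ(x):

ϕ(t) ⇒ ϕ(t)
───────────────── ∃R
ϕ(t) ⇒ ∃x ϕ(x)
────────────────── ∀L
∀x ϕ(x) ⇒ ∃x ϕ(x)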


15.12 Soundness
A derivation system, such as the sequent calculus, is sound if it cannot de-
rive things that do not actually hold. Soundness is thus a kind of guaranteed
safety property for derivation systems. Depending on which proof theoretic
property is in question, we would like to know, for instance, that

1. every derivable ϕ is valid;

2. if a sentence is derivable from some others, it is also a consequence of


them;

3. if a set of sentences is inconsistent, it is unsatisfiable.

These are important properties of a derivation system. If any of them do not


hold, the derivation system is deficient—it would derive too much. Conse-
quently, establishing the soundness of a derivation system is of the utmost
importance.
Because all these proof-theoretic properties are defined via derivability in
the sequent calculus of certain sequents, proving (1)–(3) above requires prov-
ing something about the semantic properties of derivable sequents. We will
first define what it means for a sequent to be valid, and then show that every
derivable sequent is valid. (1)–(3) then follow as corollaries from this result.

Definition 15.27. A structure M satisfies a sequent Γ ⇒ ∆ iff either M ⊭ ϕ for
some ϕ ∈ Γ or M ⊨ ϕ for some ϕ ∈ ∆.

A sequent is valid iff every structure M satisfies it.
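This matches the intuitive reading of sequents from the beginning of the chapter: M satisfies Γ ⇒ ∆ iff M makes the corresponding sentence (ϕ₁ ∧ · · · ∧ ϕₘ) → (ψ₁ ∨ · · · ∨ ψₙ) true. In particular, the sequent ⇒ ϕ is valid iff ϕ is valid, and a sequent Γ ⇒ with empty succedent is valid iff Γ is unsatisfiable.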

Theorem 15.28 (Soundness). If LK derives Θ ⇒ Ξ, then Θ ⇒ Ξ is valid.

Proof. Let π be a derivation of Θ ⇒ Ξ. We proceed by induction on the num-


ber of inferences n in π.
If the number of inferences is 0, then π consists only of an initial sequent.
Every initial sequent ϕ ⇒ ϕ is obviously valid, since for every M, either M ⊭ ϕ
or M ⊨ ϕ.
If the number of inferences is greater than 0, we distinguish cases accord-
ing to the type of the lowermost inference. By induction hypothesis, we can
assume that the premises of that inference are valid, since the number of in-
ferences in the proof of any premise is smaller than n.
First, we consider the possible inferences with only one premise.

1. The last inference is a weakening. Then Θ ⇒ Ξ is either ϕ, Γ ⇒ ∆ (if the
last inference is WL) or Γ ⇒ ∆, ϕ (if it’s WR), and the derivation ends in
one of

Γ ⇒ ∆                  Γ ⇒ ∆
────────── WL          ────────── WR
ϕ, Γ ⇒ ∆               Γ ⇒ ∆, ϕ

By induction hypothesis, Γ ⇒ ∆ is valid, i.e., for every structure M,
either there is some χ ∈ Γ such that M ⊭ χ or there is some χ ∈ ∆ such
that M ⊨ χ.

If M ⊭ χ for some χ ∈ Γ, then χ ∈ Θ as well since Γ is contained in Θ, and so
M ⊭ χ for some χ ∈ Θ. Similarly, if M ⊨ χ for some χ ∈ ∆, as χ ∈ Ξ,
M ⊨ χ for some χ ∈ Ξ. Consequently, Θ ⇒ Ξ is valid.

2. The last inference is ¬L: Then the premise of the last inference is Γ ⇒
∆, ϕ and the conclusion is ¬ϕ, Γ ⇒ ∆, i.e., the derivation ends in

Γ ⇒ ∆, ϕ
────────── ¬L
¬ϕ, Γ ⇒ ∆

and Θ = ¬ϕ, Γ while Ξ = ∆.

The induction hypothesis tells us that Γ ⇒ ∆, ϕ is valid, i.e., for every
M, either (a) for some χ ∈ Γ, M ⊭ χ, or (b) for some χ ∈ ∆, M ⊨ χ, or (c)
M ⊨ ϕ. We want to show that Θ ⇒ Ξ is also valid. Let M be a structure.
If (a) holds, then there is χ ∈ Γ so that M ⊭ χ, but χ ∈ Θ as well. If
(b) holds, there is χ ∈ ∆ such that M ⊨ χ, but χ ∈ Ξ as well. Finally, if
M ⊨ ϕ, then M ⊭ ¬ϕ. Since ¬ϕ ∈ Θ, there is χ ∈ Θ such that M ⊭ χ.
Consequently, Θ ⇒ Ξ is valid.

3. The last inference is ¬R: Exercise.

4. The last inference is ∧L: There are two variants: ϕ ∧ ψ may be inferred
on the left from ϕ or from ψ on the left side of the premise. In the first
case, π ends in

ϕ, Γ ⇒ ∆
──────────── ∧L
ϕ ∧ ψ, Γ ⇒ ∆

and Θ = ϕ ∧ ψ, Γ while Ξ = ∆. Consider a structure M. Since by
induction hypothesis, ϕ, Γ ⇒ ∆ is valid, (a) M ⊭ ϕ, (b) M ⊭ χ for some
χ ∈ Γ, or (c) M ⊨ χ for some χ ∈ ∆. In case (a), M ⊭ ϕ ∧ ψ, so there
is χ ∈ Θ (namely, ϕ ∧ ψ) such that M ⊭ χ. In case (b), there is χ ∈ Γ
such that M ⊭ χ, and χ ∈ Θ as well. In case (c), there is χ ∈ ∆ such
that M ⊨ χ, and χ ∈ Ξ as well since Ξ = ∆. So in each case, M satisfies
ϕ ∧ ψ, Γ ⇒ ∆. Since M was arbitrary, Θ ⇒ Ξ is valid. The case where
ϕ ∧ ψ is inferred from ψ is handled the same, changing ϕ to ψ.

5. The last inference is ∨R: There are two variants: ϕ ∨ ψ may be inferred
on the right from ϕ or from ψ on the right side of the premise. In the first
case, π ends in

Γ ⇒ ∆, ϕ
──────────── ∨R
Γ ⇒ ∆, ϕ ∨ ψ

Now Θ = Γ and Ξ = ∆, ϕ ∨ ψ. Consider a structure M. Since Γ ⇒ ∆, ϕ
is valid, (a) M ⊨ ϕ, (b) M ⊭ χ for some χ ∈ Γ, or (c) M ⊨ χ for some
χ ∈ ∆. In case (a), M ⊨ ϕ ∨ ψ. In case (b), there is χ ∈ Γ such that M ⊭ χ.
In case (c), there is χ ∈ ∆ such that M ⊨ χ. So in each case, M satisfies
Γ ⇒ ∆, ϕ ∨ ψ, i.e., Θ ⇒ Ξ. Since M was arbitrary, Θ ⇒ Ξ is valid. The
case where ϕ ∨ ψ is inferred from ψ is handled the same, changing ϕ to
ψ.

6. The last inference is →R: Then π ends in

ϕ, Γ ⇒ ∆, ψ
────────────── →R
Γ ⇒ ∆, ϕ → ψ

Again, the induction hypothesis says that the premise is valid; we want
to show that the conclusion is valid as well. Let M be arbitrary. Since
ϕ, Γ ⇒ ∆, ψ is valid, at least one of the following cases obtains: (a) M ⊭
ϕ, (b) M ⊨ ψ, (c) M ⊭ χ for some χ ∈ Γ, or (d) M ⊨ χ for some χ ∈ ∆.
In cases (a) and (b), M ⊨ ϕ → ψ and so there is a χ ∈ ∆, ϕ → ψ such that
M ⊨ χ. In case (c), for some χ ∈ Γ, M ⊭ χ. In case (d), for some χ ∈ ∆,
M ⊨ χ. In each case, M satisfies Γ ⇒ ∆, ϕ → ψ. Since M was arbitrary,
Γ ⇒ ∆, ϕ → ψ is valid.

7. The last inference is ∀L: Then there is a formula ϕ(x) and a closed term t
such that π ends in

ϕ(t), Γ ⇒ ∆
─────────────── ∀L
∀x ϕ(x), Γ ⇒ ∆

We want to show that the conclusion ∀x ϕ(x), Γ ⇒ ∆ is valid. Consider
a structure M. Since the premise ϕ(t), Γ ⇒ ∆ is valid, (a) M ⊭ ϕ(t), (b)
M ⊭ χ for some χ ∈ Γ, or (c) M ⊨ χ for some χ ∈ ∆. In case (a), by
??, if M ⊨ ∀x ϕ(x), then M ⊨ ϕ(t). Since M ⊭ ϕ(t), M ⊭ ∀x ϕ(x). In
cases (b) and (c), M also satisfies ∀x ϕ(x), Γ ⇒ ∆. Since M was arbitrary,
∀x ϕ(x), Γ ⇒ ∆ is valid.

8. The last inference is ∃R: Exercise.

9. The last inference is ∀R: Then there is a formula ϕ(x) and a constant
symbol a such that π ends in

Γ ⇒ ∆, ϕ(a)
─────────────── ∀R
Γ ⇒ ∆, ∀x ϕ(x)

where the eigenvariable condition is satisfied, i.e., a does not occur in
ϕ(x), Γ, or ∆. By induction hypothesis, the premise of the last inference
is valid. We have to show that the conclusion is valid as well, i.e., that
for any structure M, (a) M ⊨ ∀x ϕ(x), (b) M ⊭ χ for some χ ∈ Γ, or
(c) M ⊨ χ for some χ ∈ ∆.

Suppose M is an arbitrary structure. If (b) or (c) holds, we are done, so
suppose neither holds: for all χ ∈ Γ, M ⊨ χ, and for all χ ∈ ∆, M ⊭ χ.
We have to show that (a) holds, i.e., M ⊨ ∀x ϕ(x). By ??, it suffices
to show that M, s ⊨ ϕ(x) for all variable assignments s. So let s be an
arbitrary variable assignment. Consider the structure M′ which is just
like M except aᴹ′ = s(x). By ??, for any χ ∈ Γ, M′ ⊨ χ since a does
not occur in Γ, and for any χ ∈ ∆, M′ ⊭ χ. But the premise is valid, so
M′ ⊨ ϕ(a). By ??, M′, s ⊨ ϕ(a), since ϕ(a) is a sentence. Now s ∼ₓ s
with s(x) = Valᴹ′ₛ(a), since we’ve defined M′ in just this way. So ??
applies, and we get M′, s ⊨ ϕ(x). Since a does not occur in ϕ(x), by
??, M, s ⊨ ϕ(x). Since s was arbitrary, we’ve completed the proof that
M, s ⊨ ϕ(x) for all variable assignments.

10. The last inference is ∃L: Exercise.

Now let’s consider the possible inferences with two premises.


1. The last inference is a cut: then π ends in

Γ ⇒ ∆, ϕ    ϕ, Π ⇒ Λ
────────────────────── Cut
Γ, Π ⇒ ∆, Λ

Let M be a structure. By induction hypothesis, the premises are valid,
so M satisfies both premises. We distinguish two cases: (a) M ⊭ ϕ and
(b) M ⊨ ϕ. In case (a), in order for M to satisfy the left premise, it must
satisfy Γ ⇒ ∆. But then it also satisfies the conclusion. In case (b), in
order for M to satisfy the right premise, it must satisfy Π ⇒ Λ. Again, M
satisfies the conclusion.

2. The last inference is ∧R. Then π ends in

Γ ⇒ ∆, ϕ    Γ ⇒ ∆, ψ
────────────────────── ∧R
Γ ⇒ ∆, ϕ ∧ ψ

Consider a structure M. If M satisfies Γ ⇒ ∆, we are done. So suppose
it doesn’t. Since Γ ⇒ ∆, ϕ is valid by induction hypothesis, M ⊨ ϕ.
Similarly, since Γ ⇒ ∆, ψ is valid, M ⊨ ψ. But then M ⊨ ϕ ∧ ψ.

3. The last inference is ∨L: Exercise.

4. The last inference is →L. Then π ends in

Γ ⇒ ∆, ϕ    ψ, Π ⇒ Λ
────────────────────── →L
ϕ → ψ, Γ, Π ⇒ ∆, Λ

Again, consider a structure M and suppose M doesn’t satisfy Γ, Π ⇒
∆, Λ. We have to show that M ⊭ ϕ → ψ. If M doesn’t satisfy Γ, Π ⇒
∆, Λ, it satisfies neither Γ ⇒ ∆ nor Π ⇒ Λ. Since Γ ⇒ ∆, ϕ is valid,
we have M ⊨ ϕ. Since ψ, Π ⇒ Λ is valid, we have M ⊭ ψ. But then
M ⊭ ϕ → ψ, which is what we wanted to show.

Corollary 15.29. If ⊢ ϕ then ϕ is valid.

Corollary 15.30. If Γ ⊢ ϕ then Γ ⊨ ϕ.


Proof. If Γ ⊢ ϕ then for some finite subset Γ₀ ⊆ Γ, there is a derivation of
Γ₀ ⇒ ϕ. By ??, every structure M either makes some ψ ∈ Γ₀ false or makes ϕ
true. Hence, if M ⊨ Γ then also M ⊨ ϕ.

Corollary 15.31. If Γ is satisfiable, then it is consistent.

Proof. We prove the contrapositive. Suppose that Γ is not consistent. Then there is a finite Γ0 ⊆ Γ and a derivation of Γ0 ⇒ . By ??, Γ0 ⇒ is valid. In other words, for every structure M, there is χ ∈ Γ0 so that M 2 χ, and since Γ0 ⊆ Γ, that χ is also in Γ. Thus, no M satisfies Γ, and Γ is not satisfiable.

15.13 Derivations with Identity predicate


Derivations with identity predicate require additional initial sequents and in-
ference rules.

Definition 15.32 (Initial sequents for =). If t is a closed term, then ⇒ t = t is an initial sequent.

The rules for = are (t1 and t2 are closed terms):

t1 = t2 , Γ ⇒ ∆, ϕ(t1 ) t1 = t2 , Γ ⇒ ∆, ϕ(t2 )
= =
t1 = t2 , Γ ⇒ ∆, ϕ(t2 ) t1 = t2 , Γ ⇒ ∆, ϕ(t1 )

Example 15.33. If s and t are closed terms, then s = t, ϕ(s) ` ϕ(t):

ϕ(s) ⇒ ϕ(s)
WL
s = t, ϕ(s) ⇒ ϕ(s)
=
s = t, ϕ(s) ⇒ ϕ(t)

This may be familiar as the principle of substitutability of identicals, or Leibniz’ Law.
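
Viewed operationally, the = rules turn on a single syntactic operation: ϕ(t2 ) is obtained from ϕ(t1 ) by replacing occurrences of the closed term t1 with t2 . As a rough illustration, here is a minimal Python sketch. The representation of terms and formulas as nested tuples is an assumption made up for the example, and the sketch replaces all occurrences, whereas the rules also permit replacing only some of them.

def replace_term(expr, t1, t2):
    # Replace occurrences of the closed term t1 by t2 in expr.
    # Terms and formulas are nested tuples, e.g., ('P', ('f', 'a'))
    # stands for P(f(a)). (Illustrative encoding, not the book's syntax.)
    if expr == t1:
        return t2
    if isinstance(expr, tuple):
        return tuple(replace_term(part, t1, t2) for part in expr)
    return expr  # any other symbol is left unchanged

# For instance, with ϕ( x ) = P( f ( x )):
# replace_term(('P', ('f', 'a')), ('f', 'a'), 'b') == ('P', 'b')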
LK proves that = is symmetric and transitive:

t1 = t2 ⇒ t1 = t2
⇒ t1 = t1
WL t2 = t3 , t1 = t2 ⇒ t1 = t2 WL
t1 = t2 ⇒ t1 = t1 =
= t2 = t3 , t1 = t2 ⇒ t1 = t3
t1 = t2 ⇒ t2 = t1 XL
t1 = t2 , t2 = t3 ⇒ t1 = t3

In the proof on the left, the formula x = t1 is our ϕ( x ). On the right, we take
ϕ( x ) to be t1 = x.


15.14 Soundness with Identity predicate


Proposition 15.34. LK with initial sequents and rules for identity is sound.

Proof. Initial sequents of the form ⇒ t = t are valid, since for every struc-
ture M, M  t = t. (Note that we assume the term t to be closed, i.e., it
contains no variables, so variable assignments are irrelevant).
Suppose the last inference in a derivation is =. Then the premise is t1 =
t2 , Γ ⇒ ∆, ϕ(t1 ) and the conclusion is t1 = t2 , Γ ⇒ ∆, ϕ(t2 ). Consider a struc-
ture M. We need to show that the conclusion is valid, i.e., if M  t1 = t2 and
M  Γ, then either M  χ for some χ ∈ ∆ or M  ϕ(t2 ).
By induction hypothesis, the premise is valid. This means that if M 
t1 = t2 and M  Γ either (a) for some χ ∈ ∆, M  χ or (b) M  ϕ(t1 ). In
case (a) we are done. Consider case (b). Let s be a variable assignment with
s( x ) = ValM (t1 ). By ??, M, s  ϕ(t1 ). Since s ∼ x s, by ??, M, s  ϕ( x ). Since
M  t1 = t2 , we have ValM (t1 ) = ValM (t2 ), and hence s( x ) = ValM (t2 ). By
applying ?? again, we also have M, s  ϕ(t2 ). By ??, M  ϕ(t2 ).

Problems
Problem 15.1. Give derivations of the following sequents:

1. ⇒ ¬( ϕ → ψ) → ( ϕ ∧ ¬ψ)

2. ( ϕ ∧ ψ) → χ ⇒ ( ϕ → χ) ∨ (ψ → χ)

Problem 15.2. Give derivations of the following sequents:

1. ∀ x ( ϕ( x ) → ψ) ⇒ (∃y ϕ(y) → ψ)

2. ∃ x ( ϕ( x ) → ∀y ϕ(y))

Problem 15.3. Prove ??

Problem 15.4. Prove that Γ ` ¬ ϕ iff Γ ∪ { ϕ} is inconsistent.

Problem 15.5. Complete the proof of ??.

Problem 15.6. Give derivations of the following sequents:

1. ⇒ ∀ x ∀y (( x = y ∧ ϕ( x )) → ϕ(y))

2. ∃ x ϕ( x ) ∧ ∀y ∀z (( ϕ(y) ∧ ϕ(z)) → y = z) ⇒ ∃ x ( ϕ( x ) ∧ ∀y ( ϕ(y) → y = x ))



Chapter 16

Natural Deduction

This chapter presents a natural deduction system in the style of Gentzen/Prawitz.
To include or exclude material relevant to natural deduction as a proof
system, use the “prfND” tag.

16.1 Rules and Derivations


Natural deduction systems are meant to closely parallel the informal reason-
ing used in mathematical proof (hence it is somewhat “natural”). Natural
deduction proofs begin with assumptions. Inference rules are then applied.
Assumptions are “discharged” by the ¬Intro, →Intro, ∨Elim and ∃Elim in-
ference rules, and the label of the discharged assumption is placed beside the
inference for clarity.

Definition 16.1 (Initial Formula). An initial formula or assumption is any formula in the topmost position of any branch.

Derivations in natural deduction are certain trees of sentences, where the topmost sentences are assumptions, and if a sentence stands below one, two, or three other sentences, it must follow correctly by a rule of inference. The sen-
tences at the top of the inference are called the premises, and the sentence below,
the conclusion of the inference. The rules come in pairs, an introduction and
an elimination rule for each logical operator. They introduce a logical opera-
tor in the conclusion or remove a logical operator from a premise of the rule.
Some of the rules allow an assumption of a certain type to be discharged. To
indicate which assumption is discharged by which inference, we also assign
labels to both the assumption and the inference. This is indicated by writing
the assumption as “[ ϕ]n ”.


It is customary to consider rules for all logical operators, even for those (if
any) that we consider as defined.

16.2 Propositional Rules

Rules for ∧

ϕ∧ψ
ϕ ∧Elim
ϕ ψ
∧Intro
ϕ∧ψ ϕ∧ψ
ψ
∧Elim

Rules for ∨

ϕ [ ϕ]n [ψ]n
∨Intro
ϕ∨ψ
ψ
∨Intro ϕ∨ψ χ χ
ϕ∨ψ n ∨Elim
χ

Rules for →

[ ϕ]n
ϕ→ψ ϕ
ψ
→Elim
ψ
n →Intro
ϕ→ψ

Rules for ¬

[ ϕ]n
¬ϕ ϕ
¬Elim


¬ ϕ ¬Intro
n


Rules for ⊥

[¬ ϕ]n

⊥ ⊥
ϕ I

n
⊥ ⊥
ϕ C

Note that ¬Intro and ⊥C are very similar: The difference is that ¬Intro derives
a negated sentence ¬ ϕ but ⊥C a positive sentence ϕ.

16.3 Quantifier Rules

Rules for ∀

ϕ( a) ∀ x ϕ( x )
∀Intro ∀Elim
∀ x ϕ( x ) ϕ(t)

In the rules for ∀, t is a ground term (a term that does not contain any vari-
ables), and a is a constant symbol which does not occur in the conclusion ∀ x ϕ( x ),
or in any assumption which is undischarged in the derivation ending with the
premise ϕ( a). We call a the eigenvariable of the ∀Intro inference.

Rules for ∃

[ϕ( a)]n
ϕ(t)
∃Intro
∃ x ϕ( x )
∃ x ϕ( x ) χ
n
χ ∃Elim

Again, t is a ground term, and a is a constant which does not occur in the
premise ∃ x ϕ( x ), in the conclusion χ, or any assumption which is undischarged
in the derivations ending with the two premises (other than the assumptions
ϕ( a)). We call a the eigenvariable of the ∃Elim inference.
The condition that an eigenvariable neither occur in the premises nor in
any assumption that is undischarged in the derivations leading to the premises
for the ∀Intro or ∃Elim inference is called the eigenvariable condition.


We use the term “eigenvariable” even though a in the above rules is a constant. This has historical reasons.
In ∃Intro and ∀Elim there are no restrictions, and the term t can be any-
thing, so we do not have to worry about any conditions. On the other hand,
in the ∃Elim and ∀Intro rules, the eigenvariable condition requires that the
constant symbol a does not occur anywhere in the conclusion or in an undis-
charged assumption. The condition is necessary to ensure that the system
is sound, i.e., only derives sentences from undischarged assumptions from
which they follow. Without this condition, the following would be allowed:

[ ϕ( a)]1
*∀Intro
∃ x ϕ( x ) ∀ x ϕ( x )
∃Elim
∀ x ϕ( x )
However, ∃ x ϕ( x ) 2 ∀ x ϕ( x ).

16.4 Derivations
We’ve said what an assumption is, and we’ve given the rules of inference.
Derivations in natural deduction are inductively generated from these: each
derivation either is an assumption on its own, or consists of one, two, or three
derivations followed by a correct inference.

Definition 16.2 (Derivation). A derivation of a sentence ϕ from assumptions Γ is a tree of sentences satisfying the following conditions:

1. The topmost sentences of the tree are either in Γ or are discharged by an inference in the tree.

2. The bottommost sentence of the tree is ϕ.

3. Every sentence in the tree except ϕ is a premise of a correct application of an inference rule whose conclusion stands directly below that sentence in the tree.

We then say that ϕ is the conclusion of the derivation and that ϕ is derivable
from Γ.
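
To make the bookkeeping in this definition concrete, here is a minimal Python sketch of derivations as trees, with a function that collects the undischarged assumptions. The representation (sentences as strings, at most one discharge label per inference) is an assumption made for illustration, not part of the official definition.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Derivation:
    conclusion: str                    # the sentence at this node
    rule: Optional[str] = None         # None: this node is an assumption
    premises: List["Derivation"] = field(default_factory=list)
    label: Optional[int] = None        # label n of an assumption [ϕ]n
    discharges: Optional[int] = None   # label discharged by this inference

def undischarged(d, discharged=frozenset()):
    # Collect the assumptions not discharged by an inference below them.
    if d.rule is None:
        return set() if d.label in discharged else {d.conclusion}
    if d.discharges is not None:
        discharged = discharged | {d.discharges}
    return set().union(*[undischarged(p, discharged) for p in d.premises])

# E.g., the derivation of χ → (χ ∧ θ) constructed in Example 16.3 below:
# d = Derivation("χ → (χ ∧ θ)", rule="→Intro", discharges=1, premises=[
#         Derivation("χ ∧ θ", rule="∧Intro", premises=[
#             Derivation("χ", label=1), Derivation("θ")])])
# undischarged(d) == {"θ"}, since χ is discharged by the →Intro inference.

A check of the eigenvariable condition for ∀Intro and ∃Elim could be built on top of undischarged: verify that the eigenvariable occurs neither in the conclusion nor in any of the collected assumptions.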

Example 16.3. Every assumption on its own is a derivation. So, e.g., χ by itself
is a derivation, and so is θ by itself. We can obtain a new derivation from these
by applying, say, the ∧Intro rule,

ϕ ψ
∧Intro
ϕ∧ψ
These rules are meant to be general: we can replace the ϕ and ψ in it with any
sentences, e.g., by χ and θ. Then the conclusion would be χ ∧ θ, and so


χ θ
∧Intro
χ∧θ
is a correct derivation. Of course, we can also switch the assumptions, so that
θ plays the role of ϕ and χ that of ψ. Thus,

θ χ
∧Intro
θ∧χ
is also a correct derivation.
We can now apply another rule, say, →Intro, which allows us to conclude
a conditional and allows us to discharge any assumption that is identical to
the antecedent of that conditional. So both of the following would be correct
derivations:

[ χ ]1 θ χ [ θ ]1
∧Intro ∧Intro
χ∧θ χ∧θ
1 →Intro 1 →Intro
χ → (χ ∧ θ ) θ → (χ ∧ θ )

16.5 Examples of Derivations


Example 16.4. Let’s give a derivation of the sentence ( ϕ ∧ ψ) → ϕ.
We begin by writing the desired conclusion at the bottom of the derivation.

( ϕ ∧ ψ) → ϕ
Next, we need to figure out what kind of inference could result in a sen-
tence of this form. The main operator of the conclusion is →, so we’ll try to
arrive at the conclusion using the →Intro rule. It is best to write down the as-
sumptions involved and label the inference rules as you progress, so it is easy
to see whether all assumptions have been discharged at the end of the proof.

[ ϕ ∧ ψ ]1

ϕ
1 →Intro
( ϕ ∧ ψ) → ϕ
We now need to fill in the steps from the assumption ϕ ∧ ψ to ϕ. Since we
only have one connective to deal with, ∧, we must use the ∧Elim rule. This
gives us the following proof:

[ ϕ ∧ ψ ]1
ϕ ∧Elim
1 →Intro
( ϕ ∧ ψ) → ϕ
We now have a correct derivation of ( ϕ ∧ ψ) → ϕ.


Example 16.5. Now let’s give a derivation of (¬ ϕ ∨ ψ) → ( ϕ → ψ). We begin by writing the desired conclusion at the bottom of the derivation.

(¬ ϕ ∨ ψ) → ( ϕ → ψ)

To find a logical rule that could give us this conclusion, we look at the logical connectives in the conclusion: ¬, ∨, and →. We only care at the moment about the first occurrence of → because it is the main operator of the sentence we want to derive, while ¬, ∨ and the second occurrence of → are inside the scope of another connective, so we will take care of those later. We therefore start with the →Intro rule. A correct application must look as follows:

[¬ ϕ ∨ ψ]1

ϕ→ψ
1 →Intro
(¬ ϕ ∨ ψ) → ( ϕ → ψ)

This leaves us with two possibilities to continue. Either we can keep work-
ing from the bottom up and look for another application of the →Intro rule, or
we can work from the top down and apply a ∨Elim rule. Let us apply the lat-
ter. We will use the assumption ¬ ϕ ∨ ψ as the leftmost premise of ∨Elim. For
a valid application of ∨Elim, the other two premises must be identical to the
conclusion ϕ → ψ, but each may be derived in turn from another assumption,
namely the two disjuncts of ¬ ϕ ∨ ψ. So our derivation will look like this:

[¬ ϕ]2 [ ψ ]2

[¬ ϕ ∨ ψ]1 ϕ→ψ ϕ→ψ


2
ϕ→ψ
∨Elim
1 →Intro
(¬ ϕ ∨ ψ) → ( ϕ → ψ)

In each of the two branches on the right, we want to derive ϕ → ψ, which is best done using →Intro.

[¬ ϕ]2 , [ ϕ]3 [ ψ ]2 , [ ϕ ]4

ψ ψ
3 →Intro 4 →Intro
[¬ ϕ ∨ ψ]1 ϕ→ψ ϕ→ψ
2
ϕ→ψ
∨Elim
1 →Intro
(¬ ϕ ∨ ψ) → ( ϕ → ψ)


For the two missing parts of the derivation, we need derivations of ψ from
¬ ϕ and ϕ in the middle, and from ϕ and ψ on the right. Let’s take the former
first. ¬ ϕ and ϕ are the two premises of ¬Elim:

[¬ ϕ]2 [ ϕ ]3
¬Elim

ψ
By using ⊥ I , we can obtain ψ as a conclusion and complete the branch.

[ ψ ]2 , [ ϕ ]4
[¬ ϕ]2 [ ϕ ]3
¬Elim
⊥ ⊥
I
ψ ψ
3 →Intro 4 →Intro
[¬ ϕ ∨ ψ]1 ϕ→ψ ϕ→ψ
2
ϕ→ψ
∨Elim
1 →Intro
(¬ ϕ ∨ ψ) → ( ϕ → ψ)
Let’s now look at the rightmost branch. Here it’s important to realize that
the definition of derivation allows assumptions to be discharged but does not re-
quire them to be. In other words, if we can derive ψ from one of the assump-
tions ϕ and ψ without using the other, that’s ok. And to derive ψ from ψ is
trivial: ψ by itself is such a derivation, and no inferences are needed. So we
can simply delete the assumption ϕ.

[¬ ϕ]2 [ ϕ ]3
¬Elim
⊥ ⊥
I
ψ [ ψ ]2
3 →Intro →Intro
[¬ ϕ ∨ ψ]1 ϕ→ψ ϕ→ψ
2
ϕ→ψ
∨Elim
1 →Intro
(¬ ϕ ∨ ψ) → ( ϕ → ψ)
Note that in the finished derivation, the rightmost →Intro inference does not
actually discharge any assumptions.

Example 16.6. So far we have not needed the ⊥C rule. It is special in that it al-
lows us to discharge an assumption that isn’t a sub-formula of the conclusion
of the rule. It is closely related to the ⊥ I rule. In fact, the ⊥ I rule is a special
case of the ⊥C rule—there is a logic called “intuitionistic logic” in which only
⊥ I is allowed. The ⊥C rule is a last resort when nothing else works. For in-
stance, suppose we want to derive ϕ ∨ ¬ ϕ. Our usual strategy would be to
attempt to derive ϕ ∨ ¬ ϕ using ∨Intro. But this would require us to derive
either ϕ or ¬ ϕ from no assumptions, and this can’t be done. ⊥C to the rescue!


[¬( ϕ ∨ ¬ ϕ)]1

1
⊥ ⊥C
ϕ ∨ ¬ϕ

Now we’re looking for a derivation of ⊥ from ¬( ϕ ∨ ¬ ϕ). Since ⊥ is the conclusion of ¬Elim we might try that:

[¬( ϕ ∨ ¬ ϕ)]1 [¬( ϕ ∨ ¬ ϕ)]1

¬ϕ ϕ
¬Elim
1
⊥ ⊥C
ϕ ∨ ¬ϕ
Our strategy for finding a derivation of ¬ ϕ calls for an application of ¬Intro:

[¬( ϕ ∨ ¬ ϕ)]1 , [ ϕ]2


[¬( ϕ ∨ ¬ ϕ)]1


2
¬ ϕ ¬Intro ϕ
¬Elim
1
⊥ ⊥C
ϕ ∨ ¬ϕ

Here, we can get ⊥ easily by applying ¬Elim to the assumption ¬( ϕ ∨ ¬ ϕ) and ϕ ∨ ¬ ϕ which follows from our new assumption ϕ by ∨Intro:

[ ϕ ]2 [¬( ϕ ∨ ¬ ϕ)]1
[¬( ϕ ∨ ¬ ϕ)]1 ϕ ∨ ¬ ϕ ∨Intro
¬Elim

2
¬ϕ ¬ Intro ϕ
¬Elim
1
⊥ ⊥C
ϕ ∨ ¬ϕ
On the right side we use the same strategy, except we get ϕ by ⊥C :

[ ϕ ]2 [¬ ϕ]3
[¬( ϕ ∨ ¬ ϕ)]1 ϕ ∨ ¬ϕ ∨ Intro [¬( ϕ ∨ ¬ ϕ)] 1 ϕ ∨ ¬ ϕ ∨Intro
¬Elim ¬Elim
⊥ ⊥ ⊥
2
¬ϕ ¬ Intro 3
ϕ C
¬Elim
1
⊥ ⊥C
ϕ ∨ ¬ϕ


16.6 Derivations with Quantifiers


Example 16.7. When dealing with quantifiers, we have to make sure not to
violate the eigenvariable condition, and sometimes this requires us to play
around with the order of carrying out certain inferences. In general, it helps
to try and take care of rules subject to the eigenvariable condition first (they
will be lower down in the finished proof).
Let’s see how we’d give a derivation of the formula ∃ x ¬ ϕ( x ) → ¬∀ x ϕ( x ).
Starting as usual, we write

∃ x ¬ ϕ( x ) → ¬∀ x ϕ( x )
We start by writing down what it would take to justify that last step using the
→Intro rule.
[∃ x ¬ ϕ( x )]1

¬∀ x ϕ( x )
→Intro
∃ x ¬ ϕ( x ) → ¬∀ x ϕ( x )
Since there is no obvious rule to apply to ¬∀ x ϕ( x ), we will proceed by setting
up the derivation so we can use the ∃Elim rule. Here we must pay attention
to the eigenvariable condition, and choose a constant that does not appear in
∃ x ¬ ϕ( x ) or any assumptions that it depends on. (Since no constant symbols
appear, however, any choice will do fine.)
[¬ ϕ( a)]2

[∃ x ¬ ϕ( x )]1 ¬∀ x ϕ( x )
2 ∃Elim
¬∀ x ϕ( x )
→Intro
∃ x ¬ ϕ( x ) → ¬∀ x ϕ( x )
In order to derive ¬∀ x ϕ( x ), we will attempt to use the ¬Intro rule: this
requires that we derive a contradiction, possibly using ∀ x ϕ( x ) as an addi-
tional assumption. Of course, this contradiction may involve the assump-
tion ¬ ϕ( a) which will be discharged by the ∃Elim inference. We can set it up
as follows:
[¬ ϕ( a)]2 , [∀ x ϕ( x )]3


3 ¬Intro
[∃ x ¬ ϕ( x )]1 ¬∀ x ϕ( x )
2 ∃Elim
¬∀ x ϕ( x )
→Intro
∃ x ¬ ϕ( x ) → ¬∀ x ϕ( x )


It looks like we are close to getting a contradiction. The easiest rule to apply is
the ∀Elim, which has no eigenvariable conditions. Since we can use any term
we want to replace the universally quantified x, it makes the most sense to
continue using a so we can reach a contradiction.

[∀ x ϕ( x )]3
∀Elim
[¬ ϕ( a)]2 ϕ( a)
¬Elim

1
3 ¬Intro
[∃ x ¬ ϕ( x )] ¬∀ x ϕ( x )
2 ∃Elim
¬∀ x ϕ( x )
→Intro
∃ x ¬ ϕ( x ) → ¬∀ x ϕ( x )

It is important, especially when dealing with quantifiers, to double check at this point that the eigenvariable condition has not been violated. Since the
only rule we applied that is subject to the eigenvariable condition was ∃Elim,
and the eigenvariable a does not occur in any assumptions it depends on, this
is a correct derivation.

Example 16.8. Sometimes we may derive a formula from other formulas. In these cases, we may have undischarged assumptions. It is important to keep
track of our assumptions as well as the end goal.
Let’s see how we’d give a derivation of the formula ∃ x χ( x, b) from the
assumptions ∃ x ( ϕ( x ) ∧ ψ( x )) and ∀ x (ψ( x ) → χ( x, b)). Starting as usual, we
write the conclusion at the bottom.

∃ x χ( x, b)

We have two premises to work with. To use the first, i.e., try to find a
derivation of ∃ x χ( x, b) from ∃ x ( ϕ( x ) ∧ ψ( x )) we would use the ∃Elim rule.
Since it has an eigenvariable condition, we will apply that rule first. We get
the following:

[ ϕ( a) ∧ ψ( a)]1

∃ x ( ϕ( x ) ∧ ψ( x )) ∃ x χ( x, b)
1 ∃Elim
∃ x χ( x, b)

The two assumptions we are working with share ψ. It may be useful at this


point to apply ∧Elim to separate out ψ( a).

[ ϕ( a) ∧ ψ( a)]1
∧Elim
ψ( a)

∃ x ( ϕ( x ) ∧ ψ( x )) ∃ x χ( x, b)
1 ∃Elim
∃ x χ( x, b)

The second assumption we have to work with is ∀ x (ψ( x ) → χ( x, b)). Since there is no eigenvariable condition we can instantiate x with the constant sym-
bol a using ∀Elim to get ψ( a) → χ( a, b). We now have both ψ( a) → χ( a, b) and
ψ( a). Our next move should be a straightforward application of the →Elim
rule.

∀ x (ψ( x ) → χ( x, b)) [ ϕ( a) ∧ ψ( a)]1


∀Elim ∧Elim
ψ( a) → χ( a, b) ψ( a)
→Elim
χ( a, b)

∃ x ( ϕ( x ) ∧ ψ( x )) ∃ x χ( x, b)
1 ∃Elim
∃ x χ( x, b)

We are so close! One application of ∃Intro and we have reached our goal.

∀ x (ψ( x ) → χ( x, b)) [ ϕ( a) ∧ ψ( a)]1


∀Elim ∧Elim
ψ( a) → χ( a, b) ψ( a)
→Elim
χ( a, b)
∃Intro
∃ x ( ϕ( x ) ∧ ψ( x )) ∃ x χ( x, b)
1 ∃Elim
∃ x χ( x, b)

Since we ensured at each step that the eigenvariable conditions were not vio-
lated, we can be confident that this is a correct derivation.

Example 16.9. Give a derivation of the formula ¬∀ x ϕ( x ) from the assumptions ∀ x ϕ( x ) → ∃y ψ(y) and ¬∃y ψ(y). Starting as usual, we write the target
formula at the bottom.

¬∀ x ϕ( x )

The last line of the derivation is a negation, so let’s try using ¬Intro. This will


require that we figure out how to derive a contradiction.

[∀ x ϕ( x )]1


1 ¬Intro
¬∀ x ϕ( x )

So far so good. We can use ∀Elim but it’s not obvious if that will help us
get to our goal. Instead, let’s use one of our assumptions. ∀ x ϕ( x ) → ∃y ψ(y)
together with ∀ x ϕ( x ) will allow us to use the →Elim rule.

∀ x ϕ( x ) → ∃y ψ(y) [∀ x ϕ( x )]1
→Elim
∃y ψ(y)


1 ¬Intro
¬∀ x ϕ( x )

We now have one final assumption to work with, and it looks like this will
help us reach a contradiction by using ¬Elim.

∀ x ϕ( x ) → ∃y ψ(y) [∀ x ϕ( x )]1
→Elim
¬∃y ψ(y) ∃y ψ(y)
¬Elim

1 ¬Intro
¬∀ x ϕ( x )

16.7 Proof-Theoretic Notions

This section collects the definitions of the provability relation and consistency for natural deduction.

Just as we’ve defined a number of important semantic notions (validity, entailment, satisfiability), we now define corresponding proof-theoretic notions. These
are not defined by appeal to satisfaction of sentences in structures, but by ap-
peal to the derivability or non-derivability of certain sentences from others. It
was an important discovery that these notions coincide. That they do is the
content of the soundness and completeness theorems.

Definition 16.10 (Theorems). A sentence ϕ is a theorem if there is a derivation of ϕ in natural deduction in which all assumptions are discharged. We write
` ϕ if ϕ is a theorem and 0 ϕ if it is not.


Definition 16.11 (Derivability). A sentence ϕ is derivable from a set of sentences Γ, Γ ` ϕ, if there is a derivation with conclusion ϕ and in which every
assumption is either discharged or is in Γ. If ϕ is not derivable from Γ we
write Γ 0 ϕ.

Definition 16.12 (Consistency). A set of sentences Γ is inconsistent iff Γ ` ⊥. If Γ is not inconsistent, i.e., if Γ 0 ⊥, we say it is consistent.

Proposition 16.13 (Reflexivity). If ϕ ∈ Γ, then Γ ` ϕ.

Proof. The assumption ϕ by itself is a derivation of ϕ where every undischarged assumption (i.e., ϕ) is in Γ.

Proposition 16.14 (Monotony). If Γ ⊆ ∆ and Γ ` ϕ, then ∆ ` ϕ.

Proof. Any derivation of ϕ from Γ is also a derivation of ϕ from ∆.

Proposition 16.15 (Transitivity). If Γ ` ϕ and { ϕ} ∪ ∆ ` ψ, then Γ ∪ ∆ ` ψ.

Proof. If Γ ` ϕ, there is a derivation δ0 of ϕ with all undischarged assumptions in Γ. If { ϕ} ∪ ∆ ` ψ, then there is a derivation δ1 of ψ with all undischarged
assumptions in { ϕ} ∪ ∆. Now consider:

∆, [ ϕ]1

δ1 Γ
δ0
ψ
1 →Intro
ϕ→ψ ϕ
ψ
→Elim

The undischarged assumptions are now all among Γ ∪ ∆, so this shows Γ ∪ ∆ ` ψ.

Note that this means that in particular if Γ ` ϕ and ϕ ` ψ, then Γ ` ψ. It follows also that if ϕ1 , . . . , ϕn ` ψ and Γ ` ϕi for each i, then Γ ` ψ.

Proposition 16.16. Γ is inconsistent iff Γ ` ϕ for every sentence ϕ.

Proof. Exercise.

Proposition 16.17 (Compactness). 1. If Γ ` ϕ then there is a finite subset Γ0 ⊆ Γ such that Γ0 ` ϕ.

2. If every finite subset of Γ is consistent, then Γ is consistent.

Proof. 1. If Γ ` ϕ, then there is a derivation δ of ϕ from Γ. Let Γ0 be the set of undischarged assumptions of δ. Since any derivation is finite, Γ0 can
only contain finitely many sentences. So, δ is a derivation of ϕ from a
finite Γ0 ⊆ Γ.


2. This is the contrapositive of (1) for the special case ϕ ≡ ⊥.

16.8 Derivability and Consistency


We will now establish a number of properties of the derivability relation. They
are independently interesting, but each will play a role in the proof of the
completeness theorem.

Proposition 16.18. If Γ ` ϕ and Γ ∪ { ϕ} is inconsistent, then Γ is inconsistent.

Proof. Let the derivation of ϕ from Γ be δ1 and the derivation of ⊥ from Γ ∪ { ϕ} be δ2 . We can then derive:

Γ, [ ϕ]1
Γ
δ2
δ1

¬ ϕ ¬Intro
1
ϕ
¬Elim

In the new derivation, the assumption ϕ is discharged, so it is a derivation
from Γ.

Proposition 16.19. Γ ` ϕ iff Γ ∪ {¬ ϕ} is inconsistent.

Proof. First suppose Γ ` ϕ, i.e., there is a derivation δ0 of ϕ from undischarged assumptions Γ. We obtain a derivation of ⊥ from Γ ∪ {¬ ϕ} as follows:

Γ
δ0
¬ϕ ϕ
¬Elim

Now assume Γ ∪ {¬ ϕ} is inconsistent, and let δ1 be the corresponding
derivation of ⊥ from undischarged assumptions in Γ ∪ {¬ ϕ}. We obtain
a derivation of ϕ from Γ alone by using ⊥C :

Γ, [¬ ϕ]1

δ1

⊥ ⊥
ϕ C

Proposition 16.20. If Γ ` ϕ and ¬ ϕ ∈ Γ, then Γ is inconsistent.


Proof. Suppose Γ ` ϕ and ¬ ϕ ∈ Γ. Then there is a derivation δ of ϕ from Γ. Consider this simple application of the ¬Elim rule:

δ
¬ϕ ϕ
¬Elim

Since ¬ ϕ ∈ Γ, all undischarged assumptions are in Γ, and this shows that Γ ` ⊥.

Proposition 16.21. If Γ ∪ { ϕ} and Γ ∪ {¬ ϕ} are both inconsistent, then Γ is inconsistent.

Proof. There are derivations δ1 and δ2 of ⊥ from Γ ∪ { ϕ} and ⊥ from Γ ∪ {¬ ϕ}, respectively. We can then derive

Γ, [¬ ϕ]2 Γ, [ ϕ]1

δ2 δ1

⊥ ⊥
¬¬ ϕ ¬Intro 1 ¬ ϕ ¬Intro
2
¬Elim

Since the assumptions ϕ and ¬ ϕ are discharged, this is a derivation of ⊥
from Γ alone. Hence Γ is inconsistent.

16.9 Derivability and the Propositional Connectives


Proposition 16.22. 1. Both ϕ ∧ ψ ` ϕ and ϕ ∧ ψ ` ψ

2. ϕ, ψ ` ϕ ∧ ψ.

Proof. 1. We can derive both

ϕ∧ψ ϕ∧ψ
ϕ ∧Elim ψ
∧Elim

2. We can derive:

ϕ ψ
∧Intro
ϕ∧ψ

Proposition 16.23. 1. ϕ ∨ ψ, ¬ ϕ, ¬ψ is inconsistent.


2. Both ϕ ` ϕ ∨ ψ and ψ ` ϕ ∨ ψ.

Proof. 1. Consider the following derivation:

¬ϕ [ ϕ ]1 ¬ψ [ ψ ]1
¬Elim ¬Elim
ϕ∨ψ ⊥ ⊥
1 ∨Elim

This is a derivation of ⊥ from undischarged assumptions ϕ ∨ ψ, ¬ ϕ, and ¬ψ.

2. We can derive both

ϕ ψ
∨Intro ∨Intro
ϕ∨ψ ϕ∨ψ

Proposition 16.24. 1. ϕ, ϕ → ψ ` ψ.

2. Both ¬ ϕ ` ϕ → ψ and ψ ` ϕ → ψ.

Proof. 1. We can derive:

ϕ→ψ ϕ
ψ
→Elim

2. This is shown by the following two derivations:

¬ϕ [ ϕ ]1
¬Elim
⊥ ⊥ ψ
I →Intro
ψ ϕ→ψ
1 →Intro
ϕ→ψ

Note that →Intro may, but does not have to, discharge the assumption ϕ.


16.10 Derivability and the Quantifiers


Theorem 16.25. If c is a constant not occurring in Γ or ϕ( x ) and Γ ` ϕ(c), then
Γ ` ∀ x ϕ ( x ).

Proof. Let δ be a derivation of ϕ(c) from Γ. By adding a ∀Intro inference, we obtain a proof of ∀ x ϕ( x ). Since c does not occur in Γ or ϕ( x ), the eigenvariable
condition is satisfied.

Proposition 16.26. 1. ϕ(t) ` ∃ x ϕ( x ).

2. ∀ x ϕ( x ) ` ϕ(t).

Proof. 1. The following is a derivation of ∃ x ϕ( x ) from ϕ(t):

ϕ(t)
∃Intro
∃ x ϕ( x )

2. The following is a derivation of ϕ(t) from ∀ x ϕ( x ):

∀ x ϕ( x )
∀Elim
ϕ(t)

16.11 Soundness
A derivation system, such as natural deduction, is sound if it cannot derive
things that do not actually follow. Soundness is thus a kind of guaranteed
safety property for derivation systems. Depending on which proof-theoretic
property is in question, we would like to know, for instance, that

1. every derivable sentence is valid;

2. if a sentence is derivable from some others, it is also a consequence of them;

3. if a set of sentences is inconsistent, it is unsatisfiable.

These are important properties of a derivation system. If any of them do not hold, the derivation system is deficient—it would derive too much. Conse-
quently, establishing the soundness of a derivation system is of the utmost
importance.

Theorem 16.27 (Soundness). If ϕ is derivable from the undischarged assumptions Γ, then Γ  ϕ.


Proof. Let δ be a derivation of ϕ. We proceed by induction on the number of inferences in δ.
For the induction basis we show the claim if the number of inferences is 0.
In this case, δ consists only of an initial formula. Every initial formula ϕ is
an undischarged assumption, and as such, any structure M that satisfies all of
the undischarged assumptions of the proof also satisfies ϕ.
Now for the inductive step. Suppose that δ contains n inferences. The
premise(s) of the lowermost inference are derived using sub-derivations, each
of which contains fewer than n inferences. We assume the induction hypothe-
sis: The premises of the last inference follow from the undischarged assump-
tions of the sub-derivations ending in those premises. We have to show that
ϕ follows from the undischarged assumptions of the entire proof.
We distinguish cases according to the type of the lowermost inference.
First, we consider the possible inferences with only one premise.

1. Suppose that the last inference is ¬Intro: The derivation has the form

Γ, [ ϕ]n

δ1


¬ ϕ ¬Intro
n

By inductive hypothesis, ⊥ follows from the undischarged assumptions Γ ∪ { ϕ} of δ1 . Consider a structure M. We need to show that, if M  Γ,
then M  ¬ ϕ. Suppose for reductio that M  Γ, but M 2 ¬ ϕ, i.e., M  ϕ.
This would mean that M  Γ ∪ { ϕ}. This is contrary to our inductive
hypothesis. So, M  ¬ ϕ.

2. The last inference is ∧Elim: There are two variants: ϕ or ψ may be in-
ferred from the premise ϕ ∧ ψ. Consider the first case. The derivation δ
looks like this:

Γ
δ1

ϕ∧ψ
ϕ ∧Elim

By inductive hypothesis, ϕ ∧ ψ follows from the undischarged assumptions Γ of δ1 . Consider a structure M. We need to show that, if M  Γ, then M  ϕ. Suppose M  Γ. By our inductive hypothesis (Γ  ϕ ∧ ψ),
we know that M  ϕ ∧ ψ. By definition, M  ϕ ∧ ψ iff M  ϕ and M  ψ.
(The case where ψ is inferred from ϕ ∧ ψ is handled similarly.)


3. The last inference is ∨Intro: There are two variants: ϕ ∨ ψ may be in-
ferred from the premise ϕ or the premise ψ. Consider the first case. The
derivation has the form

Γ
δ1
ϕ
∨Intro
ϕ∨ψ

By inductive hypothesis, ϕ follows from the undischarged assumptions Γ of δ1 . Consider a structure M. We need to show that, if M  Γ, then
M  ϕ ∨ ψ. Suppose M  Γ; then M  ϕ since Γ  ϕ (the inductive
hypothesis). So it must also be the case that M  ϕ ∨ ψ. (The case where
ϕ ∨ ψ is inferred from ψ is handled similarly.)

4. The last inference is →Intro: ϕ → ψ is inferred from a subproof with assumption ϕ and conclusion ψ, i.e.,

Γ, [ ϕ]n

δ1

ψ
n →Intro
ϕ→ψ

By inductive hypothesis, ψ follows from the undischarged assumptions of δ1 , i.e., Γ ∪ { ϕ}  ψ. Consider a structure M. The undischarged
assumptions of δ are just Γ, since ϕ is discharged at the last inference.
So we need to show that Γ  ϕ → ψ. For reductio, suppose that for
some structure M, M  Γ but M 2 ϕ → ψ. So, M  ϕ and M 2 ψ. But
by hypothesis, ψ is a consequence of Γ ∪ { ϕ}, i.e., M  ψ, which is a
contradiction. So, Γ  ϕ → ψ.

5. The last inference is ⊥ I : Here, δ ends in

Γ
δ1

⊥ ⊥
ϕ I

By induction hypothesis, Γ  ⊥. We have to show that Γ  ϕ. Suppose not; then for some M we have M  Γ and M 2 ϕ. But we always
have M 2 ⊥, so this would mean that Γ 2 ⊥, contrary to the induction
hypothesis.


6. The last inference is ⊥C : Exercise.

7. The last inference is ∀Intro: Then δ has the form

Γ
δ1

ϕ( a)
∀Intro
∀ x ϕ( x )

The premise ϕ( a) is a consequence of the undischarged assumptions Γ by induction hypothesis. Consider some structure, M, such that M  Γ.
We need to show that M  ∀ x ϕ( x ). Since ∀ x ϕ( x ) is a sentence, this
means we have to show that for every variable assignment s, M, s  ϕ( x )
(??). Since Γ consists entirely of sentences, M, s  ψ for all ψ ∈ Γ by ??.
Let M0 be like M except that aM0 = s( x ). Since a does not occur in Γ, M0  Γ by ??. Since Γ  ϕ( a), M0  ϕ( a). Since ϕ( a) is a sentence, M0 , s  ϕ( a) by ??. M0 , s  ϕ( x ) iff M0  ϕ( a) by ?? (recall that ϕ( a) is just ϕ( x )[ a/x ]). So, M0 , s  ϕ( x ). Since a does not occur in ϕ( x ), by ??, M, s  ϕ( x ). But s was an arbitrary variable assignment, so M  ∀ x ϕ( x ).
8. The last inference is ∃Intro: Exercise.

9. The last inference is ∀Elim: Exercise.

Now let’s consider the possible inferences with several premises: ∧Intro, ∨Elim, →Elim, ¬Elim, and ∃Elim.
1. The last inference is ∧Intro. ϕ ∧ ψ is inferred from the premises ϕ and ψ
and δ has the form

Γ1 Γ2

δ1 δ2

ϕ ψ
∧Intro
ϕ∧ψ

By induction hypothesis, ϕ follows from the undischarged assumptions Γ1 of δ1 and ψ follows from the undischarged assumptions Γ2 of δ2 . The undischarged assumptions of δ are Γ1 ∪ Γ2 , so we have to show that
Γ1 ∪ Γ2  ϕ ∧ ψ. Consider a structure M with M  Γ1 ∪ Γ2 . Since M  Γ1 ,
it must be the case that M  ϕ as Γ1  ϕ, and since M  Γ2 , M  ψ since
Γ2  ψ. Together, M  ϕ ∧ ψ.

2. The last inference is ∨Elim: Exercise.


3. The last inference is →Elim. ψ is inferred from the premises ϕ → ψ and ϕ. The derivation δ looks like this:

Γ1 Γ2
δ1 δ2
ϕ→ψ ϕ
ψ
→Elim

By induction hypothesis, ϕ → ψ follows from the undischarged assumptions Γ1 of δ1 and ϕ follows from the undischarged assumptions Γ2 of δ2 .
Consider a structure M. We need to show that, if M  Γ1 ∪ Γ2 , then
M  ψ. Suppose M  Γ1 ∪ Γ2 . Since Γ1  ϕ → ψ, M  ϕ → ψ. Since
Γ2  ϕ, we have M  ϕ. This means that M  ψ (For if M 2 ψ, since
M  ϕ, we’d have M 2 ϕ → ψ, contradicting M  ϕ → ψ).

4. The last inference is ¬Elim: Exercise.

5. The last inference is ∃Elim: Exercise.

Corollary 16.28. If ` ϕ, then ϕ is valid.

Corollary 16.29. If Γ is satisfiable, then it is consistent.

Proof. We prove the contrapositive. Suppose that Γ is not consistent. Then Γ ` ⊥, i.e., there is a derivation of ⊥ from undischarged assumptions in Γ.
By ??, any structure M that satisfies Γ must satisfy ⊥. Since M 2 ⊥ for every
structure M, no M can satisfy Γ, i.e., Γ is not satisfiable.

16.12 Derivations with Identity predicate


Derivations with identity predicate require additional inference rules.

t1 = t2 ϕ ( t1 )
=Elim
ϕ ( t2 )
=Intro
t=t
t1 = t2 ϕ ( t2 )
=Elim
ϕ ( t1 )

In the above rules, t, t1 , and t2 are closed terms. The =Intro rule allows us
to derive any identity statement of the form t = t outright, from no assump-
tions.


Example 16.30. If s and t are closed terms, then ϕ(s), s = t ` ϕ(t):


s=t ϕ(s)
=Elim
ϕ(t)
This may be familiar as the “principle of substitutability of identicals,” or Leib-
niz’ Law.
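
As an aside, this is also exactly how identity behaves in modern proof assistants. In Lean, for instance, the rewrite h ▸ hp plays the role of =Elim (a hedged comparison, not part of the systems defined in this book):

-- Leibniz' Law in Lean: from s = t and P s, infer P t.
example {α : Type} (P : α → Prop) (s t : α)
    (h : s = t) (hp : P s) : P t :=
  h ▸ hp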
Example 16.31. We derive the sentence
∀ x ∀y (( ϕ( x ) ∧ ϕ(y)) → x = y)

from the sentence

∃ x ∀y ( ϕ(y) → y = x )
We develop the derivation backwards:

∃ x ∀y ( ϕ(y) → y = x ) [ ϕ( a) ∧ ϕ(b)]1

a=b
1 →Intro
(( ϕ( a) ∧ ϕ(b)) → a = b)
∀Intro
∀y (( ϕ( a) ∧ ϕ(y)) → a = y)
∀Intro
∀ x ∀y (( ϕ( x ) ∧ ϕ(y)) → x = y)
We’ll now have to use the main assumption: since it is an existential formula,
we use ∃Elim to derive the intermediary conclusion a = b.

[∀y ( ϕ(y) → y = c)]2


[ ϕ( a) ∧ ϕ(b)]1

∃ x ∀y ( ϕ(y) → y = x ) a=b
2 ∃Elim
a = b
1 →Intro
(( ϕ( a) ∧ ϕ(b)) → a = b)
∀Intro
∀y (( ϕ( a) ∧ ϕ(y)) → a = y)
∀Intro
∀ x ∀y (( ϕ( x ) ∧ ϕ(y)) → x = y)
The sub-derivation on the top right is completed by using its assumptions
to show that a = c and b = c. This requires two separate derivations. The
derivation for a = c is as follows:
[∀y ( ϕ(y) → y = c)]2 [ ϕ( a) ∧ ϕ(b)]1
∀Elim ∧Elim
ϕ( a) → a = c ϕ( a)
a=c →Elim
From a = c and b = c we derive a = b by =Elim.


16.13 Soundness with Identity predicate


Proposition 16.32. Natural deduction with rules for = is sound.

Proof. Any formula of the form t = t is valid, since for every structure M,
M  t = t. (Note that we assume the term t to be ground, i.e., it contains no
variables, so variable assignments are irrelevant).
Suppose the last inference in a derivation is =Elim, i.e., the derivation has
the following form:

Γ1 Γ2

δ1 δ2

t1 = t2 ϕ ( t1 )
=Elim
ϕ ( t2 )
The premises t1 = t2 and ϕ(t1 ) are derived from undischarged assumptions Γ1
and Γ2 , respectively. We want to show that ϕ(t2 ) follows from Γ1 ∪ Γ2 . Con-
sider a structure M with M  Γ1 ∪ Γ2 . By induction hypothesis, M  ϕ(t1 )
and M  t1 = t2 . Therefore, ValM (t1 ) = ValM (t2 ). Let s be any variable
assignment, and s0 be the x-variant given by s0 ( x ) = ValM (t1 ) = ValM (t2 ).
By ??, M, s  ϕ(t1 ) iff M, s0  ϕ( x ) iff M, s  ϕ(t2 ). Since M  ϕ(t1 ), we have
M  ϕ ( t2 ).

Problems
Problem 16.1. Give derivations of the following:

1. ¬( ϕ → ψ) → ( ϕ ∧ ¬ψ)

2. ( ϕ → χ) ∨ (ψ → χ) from the assumption ( ϕ ∧ ψ) → χ

Problem 16.2. Give derivations of the following:

1. ∃y ϕ(y) → ψ from the assumption ∀ x ( ϕ( x ) → ψ)

2. ∃ x ( ϕ( x ) → ∀y ϕ(y))

Problem 16.3. Prove ??

Problem 16.4. Prove that Γ ` ¬ ϕ iff Γ ∪ { ϕ} is inconsistent.

Problem 16.5. Complete the proof of ??.

Problem 16.6. Prove that = is both symmetric and transitive, i.e., give deriva-
tions of ∀ x ∀y ( x = y → y = x ) and ∀ x ∀y ∀z(( x = y ∧ y = z) → x = z)

Problem 16.7. Give derivations of the following formulas:


1. ∀ x ∀y (( x = y ∧ ϕ( x )) → ϕ(y))

2. ∃ x ϕ( x ) ∧ ∀y ∀z (( ϕ(y) ∧ ϕ(z)) → y = z) → ∃ x ( ϕ( x ) ∧ ∀y ( ϕ(y) → y = x ))

Chapter 17

Tableaux

This chapter presents a signed analytic tableaux system.


To include or exclude material relevant to tableaux as a proof
system, use the “prfTab” tag.

17.1 Rules and Tableaux


A tableau is a systematic survey of the possible ways a sentence can be true
or false in a structure. The building blocks of a tableau are signed formulas:
sentences plus a truth value “sign,” either T or F. These signed formulas are
arranged in a (downward growing) tree.

Definition 17.1. A signed formula is a pair consisting of a truth value and a sen-
tence, i.e., either:
T ϕ or F ϕ.

Intuitively, we might read T ϕ as “ϕ might be true” and F ϕ as “ϕ might be false” (in some structure).
Each signed formula in the tree is either an assumption (which are listed at
the very top of the tree), or it is obtained from a signed formula above it by
one of a number of rules of inference. There are two rules for each possible
main operator of the preceding formula, one for the case when the sign is T,
and one for the case where the sign is F. Some rules allow the tree to branch,
and some only add signed formulas to the branch. A rule may be (and often
must be) applied not to the immediately preceding signed formula, but to any
signed formula in the branch from the root to the place the rule is applied.
A branch is closed when it contains both T ϕ and F ϕ. A closed tableau
is one where every branch is closed. Under the intuitive interpretation, any
branch describes a joint possibility, but T ϕ and F ϕ are not jointly possible. In
other words, if a branch is closed, the possibility it describes has been ruled


out. In particular, that means that a closed tableau rules out all possibilities
of simultaneously making every assumption of the form T ϕ true and every
assumption of the form F ϕ false.
A closed tableau for ϕ is a closed tableau with root F ϕ. If such a closed
tableau exists, all possibilities for ϕ being false have been ruled out; i.e., ϕ
must be true in every structure.

17.2 Propositional Rules

Rules for ¬

T¬ ϕ F ¬ϕ
¬T ¬F
Fϕ Tϕ

Rules for ∧

Tϕ ∧ ψ
∧T
Tϕ
Tψ

Fϕ ∧ ψ
∧F
F ϕ | Fψ

Rules for ∨

Tϕ ∨ ψ
∨T
T ϕ | Tψ

Fϕ ∨ ψ
∨F
Fϕ
Fψ

Rules for →

Tϕ → ψ
→T
F ϕ | Tψ

Fϕ → ψ
→F
Tϕ
Fψ


The Cut Rule

Cut
Tϕ | Fϕ

The Cut rule is not applied “to” a previous signed formula; rather, it allows
every branch in a tableau to be split in two, one branch containing T ϕ, the
other F ϕ. It is not necessary—any set of signed formulas with a closed tableau
has one not using Cut—but it allows us to combine tableaux in a convenient
way.
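
The propositional rules above, including which of them branch, can be summarized as a decomposition table. The following Python sketch encodes that table: signed formulas are pairs (sign, sentence), and sentences are nested tuples such as ('∧', p, q) with atoms as strings. This encoding is an assumption made for the example, not the official syntax of tableaux.

def decompose(signed):
    # Return a list of branches, each a list of signed formulas to add;
    # one branch = non-branching rule, two branches = branching rule.
    # Returns None for atomic sentences, to which no rule applies.
    sign, s = signed
    if isinstance(s, str):
        return None
    op = s[0]
    if op == '¬':                                  # ¬T and ¬F
        return [[('F' if sign == 'T' else 'T', s[1])]]
    p, q = s[1], s[2]
    return {
        ('T', '∧'): [[('T', p), ('T', q)]],        # ∧T
        ('F', '∧'): [[('F', p)], [('F', q)]],      # ∧F (branches)
        ('T', '∨'): [[('T', p)], [('T', q)]],      # ∨T (branches)
        ('F', '∨'): [[('F', p), ('F', q)]],        # ∨F
        ('T', '→'): [[('F', p)], [('T', q)]],      # →T (branches)
        ('F', '→'): [[('T', p), ('F', q)]],        # →F
    }[(sign, op)]

The Cut rule is deliberately left out of the table since, as just noted, it is never needed to close a tableau.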

17.3 Quantifier Rules

Rules for ∀

T ∀ x ϕ( x ) F ∀ x ϕ( x )
∀T ∀F
T ϕ(t) F ϕ( a)

In ∀T, t is a closed term (i.e., one without variables). In ∀F, a is a constant symbol which must not occur anywhere in the branch above the ∀F rule. We call a the eigenvariable of the ∀F inference.

Rules for ∃

T ∃ x ϕ( x ) F ∃ x ϕ( x )
∃T ∃F
T ϕ( a) F ϕ(t)

Again, t is a closed term, and a is a constant symbol which does not occur in
the branch above the ∃F rule. We call a the eigenvariable of the ∃F inference.
The condition that an eigenvariable not occur in the branch above the ∀F
or ∃T inference is called the eigenvariable condition.
We use the term “eigenvariable” even though a in the above rules is a con-
stant symbol. This has historical reasons.
In ∀T and ∃F there are no restrictions on the term t. On the other hand,
in the ∃T and ∀F rules, the eigenvariable condition requires that the constant
symbol a does not occur anywhere in the branches above the respective infer-
ence. It is necessary to ensure that the system is sound. Without this condition,
the following would be a closed tableau for ∃ x ϕ( x ) → ∀ x ϕ( x ):


1. F ∃ x ϕ( x ) → ∀ x ϕ( x ) Assumption
2. T ∃ x ϕ( x ) →F 1
3. F ∀ x ϕ( x ) →F 1
4. T ϕ( a) ∃T 2
5. F ϕ( a) ∀F 3

However, ∃ x ϕ( x ) → ∀ x ϕ( x ) is not valid.
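
In practice the eigenvariable condition is easy to respect: when applying ∃T or ∀F, simply pick a constant symbol that occurs nowhere in the branch so far. Here is a small Python sketch of such a choice; the helper constants_in, which returns the constant symbols occurring in a signed formula, and the naming scheme a1, a2, . . . are made-up assumptions for the example.

def fresh_constant(branch, constants_in):
    # Return a constant symbol occurring in no signed formula on the
    # branch, as the eigenvariable condition for ∃T and ∀F requires.
    used = set()
    for sf in branch:
        used |= constants_in(sf)
    n = 1
    while f'a{n}' in used:
        n += 1
    return f'a{n}'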

17.4 Tableaux
We’ve said what an assumption is, and we’ve given the rules of inference.
Tableaux are inductively generated from these: each tableau either is a single
branch consisting of one or more assumptions, or it results from a tableau by
applying one of the rules of inference on a branch.

Definition 17.2 (Tableau). A tableau for assumptions S1 ϕ1 , . . . , Sn ϕn (where each Si is either T or F) is a tree of signed formulas satisfying the following conditions:

1. The n topmost signed formulas of the tree are Si ϕi , one below the other.

2. Every signed formula in the tree that is not one of the assumptions re-
sults from a correct application of an inference rule to a signed formula
in the branch above it.

A branch of a tableau is closed iff it contains both T ϕ and F ϕ, and open other-
wise. A tableau in which every branch is closed is a closed tableau (for its set
of assumptions). If a tableau is not closed, i.e., if it contains at least one open
branch, it is open.
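
Continuing the propositional sketch from section 17.2, this definition suggests a naive search for a closed tableau: a branch is closed once it contains both T ϕ and F ϕ; otherwise, apply a rule to some not-yet-used non-atomic signed formula and require every resulting branch to close. This is an illustration for the propositional fragment only (no quantifiers, no Cut), not an official procedure of this chapter.

def branch_closed(branch):
    # A branch is closed iff it contains both T ϕ and F ϕ for some ϕ.
    trues = {s for (sign, s) in branch if sign == 'T'}
    return any(s in trues for (sign, s) in branch if sign == 'F')

def has_closed_tableau(branch, used=frozenset()):
    # Search for a closed tableau whose assumptions are the signed
    # formulas in branch. Each signed formula is decomposed at most
    # once per branch, which suffices in the propositional case.
    if branch_closed(branch):
        return True
    for sf in branch:
        if sf in used or decompose(sf) is None:
            continue
        return all(has_closed_tableau(branch + ext, used | {sf})
                   for ext in decompose(sf))
    return False  # every rule has been applied; the branch stays open

# Example 17.4 below in this notation: F (( p ∧ q) → p) has a closed
# tableau: has_closed_tableau([('F', ('→', ('∧', 'p', 'q'), 'p'))]).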

Example 17.3. Every set of assumptions on its own is a tableau, but it will gen-
erally not be closed. (Obviously, it is closed only if the assumptions already
contain a pair of signed formulas T ϕ and F ϕ.)
From a tableau (open or closed) we can obtain a new, larger one by ap-
plying one of the rules of inference to a signed formula ϕ in it. The rule will
append one or more signed formulas to the end of any branch containing the
occurrence of ϕ to which we apply the rule.
For instance, consider the assumption T ϕ ∧ ¬ ϕ. Here is the (open) tableau
consisting of just that assumption:

1. T ϕ ∧ ¬ϕ Assumption

We obtain a new tableau from it by applying the ∧T rule to the assumption. That rule allows us to add two new lines to the tableau, T ϕ and T ¬ ϕ:


1. T ϕ ∧ ¬ϕ Assumption
2. Tϕ ∧T 1
3. T¬ ϕ ∧T 1

When we write down tableaux, we record the rules we’ve applied on the right
(e.g., ∧T1 means that the signed formula on that line is the result of applying
the ∧T rule to the signed formula on line 1). This new tableau now contains
additional signed formulas, but to only one (T ¬ ϕ) can we apply a rule (in this
case, the ¬T rule). This results in the closed tableau

1. T ϕ ∧ ¬ϕ Assumption
2. Tϕ ∧T 1
3. T¬ ϕ ∧T 1
4. Fϕ ¬T 3

17.5 Examples of Tableaux


Example 17.4. Let’s find a closed tableau for the sentence ( ϕ ∧ ψ) → ϕ.
We begin by writing the corresponding assumption at the top of the tableau.

1. F ( ϕ ∧ ψ) → ϕ Assumption

There is only one assumption, so only one signed formula to which we can
apply a rule. (For every signed formula, there is always at most one rule that
can be applied: it’s the rule for the corresponding sign and main operator of
the sentence.) In this case, this means, we must apply →F.

1. F ( ϕ ∧ ψ) → ϕ X Assumption
2. Tϕ ∧ ψ →F 1
3. Fϕ →F 1

To keep track of which signed formulas we have applied their corresponding rules to, we write a checkmark next to the sentence. However, only write a
checkmark if the rule has been applied to all open branches. Once a signed
formula has had the corresponding rule applied in every open branch, we will
not have to return to it and apply the rule again. In this case, there is only one
branch, so the rule only has to be applied once. (Note that checkmarks are
only a convenience for constructing tableaux and are not officially part of the
syntax of tableaux.)
There is one new signed formula to which we can apply a rule: the T ϕ ∧ ψ
on line 2. Applying the ∧T rule results in:


1. F ( ϕ ∧ ψ) → ϕ X Assumption
2. Tϕ ∧ ψ X →F 1
3. Fϕ →F 1
4. Tϕ ∧T 2
5. Tψ ∧T 2

Since the branch now contains both T ϕ (on line 4) and F ϕ (on line 3), the
branch is closed. Since it is the only branch, the tableau is closed. We have
found a closed tableau for ( ϕ ∧ ψ) → ϕ.

Example 17.5. Now let’s find a closed tableau for (¬ ϕ ∨ ψ) → ( ϕ → ψ). We begin with the corresponding assumption:

1. F (¬ ϕ ∨ ψ) → ( ϕ → ψ) Assumption

The one signed formula in this tableau has main operator → and sign F, so
we apply the →F rule to it to obtain:

1. F (¬ ϕ ∨ ψ) → ( ϕ → ψ) X Assumption
2. T¬ ϕ ∨ ψ →F 1
3. F ( ϕ → ψ) →F 1

We now have a choice as to whether to apply ∨T to line 2 or →F to line 3. It actually doesn’t matter which order we pick, as long as each signed formula
has its corresponding rule applied in every branch. So let’s pick the first one.
The ∨T rule allows the tableau to branch, and the two conclusions of the rule
will be the new signed formulas added to the two new branches. This results
in:

1. F (¬ ϕ ∨ ψ) → ( ϕ → ψ) X Assumption
2. T¬ ϕ ∨ ψ X →F 1
3. F ( ϕ → ψ) →F 1

4. T¬ ϕ Tψ ∨T 2

We have not applied the →F rule to line 3 yet: let’s do that now. To save
time, we apply it to both branches. Recall that we write a checkmark next
to a signed formula only if we have applied the corresponding rule in every
open branch. So it’s a good idea to apply a rule at the end of every branch that
contains the signed formula the rule applies to. That way we won’t have to
return to that signed formula lower down in the various branches.


1. F (¬ ϕ ∨ ψ) → ( ϕ → ψ) X Assumption
2. T¬ ϕ ∨ ψ X →F 1
3. F ( ϕ → ψ) X →F 1

4. T¬ ϕ Tψ ∨T 2
5. Tϕ Tϕ →F 3
6. Fψ Fψ →F 3

The right branch is now closed. On the left branch, we can still apply the ¬T
rule to line 4. This results in F ϕ and closes the left branch:

1. F (¬ ϕ ∨ ψ) → ( ϕ → ψ) X Assumption
2. T¬ ϕ ∨ ψ X →F 1
3. F ( ϕ → ψ) X →F 1

4. T¬ ϕ Tψ ∨T 2
5. Tϕ Tϕ →F 3
6. Fψ Fψ →F 3
7. Fϕ ⊗ ¬T 4

Example 17.6. We can give tableaux for any number of signed formulas as
assumptions. Often it is also necessary to apply more than one rule that allows
branching; and in general a tableau can have any number of branches. For
instance, consider a tableau for {T ϕ ∨ (ψ ∧ χ), F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ)}. We start
by applying the ∨T to the first assumption:

1. T ϕ ∨ (ψ ∧ χ) X Assumption
2. F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ) Assumption

3. Tϕ Tψ ∧ χ ∨T 1

Now we can apply the ∧F rule to line 2. We do this on both branches simul-
taneously, and can therefore check off line 2:

1. T ϕ ∨ (ψ ∧ χ) X Assumption
2. F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ) X Assumption

3. Tϕ Tψ ∧ χ ∨T 1

4. Fϕ ∨ ψ Fϕ ∨ χ Fϕ ∨ ψ Fϕ ∨ χ ∧F 2


Now we can apply ∨F to all the branches containing ϕ ∨ ψ:

1. T ϕ ∨ (ψ ∧ χ) X Assumption
2. F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ) X Assumption

3. Tϕ Tψ ∧ χ ∨T 1

4. Fϕ ∨ ψ X Fϕ ∨ χ Fϕ ∨ ψ X Fϕ ∨ χ ∧F 2
5. Fϕ Fϕ ∨F 4
6. Fψ Fψ ∨F 4

The leftmost branch is now closed. Let’s now apply ∨F to ϕ ∨ χ:

1. T ϕ ∨ (ψ ∧ χ) X Assumption
2. F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ) X Assumption

3. Tϕ Tψ ∧ χ ∨T 1

4. Fϕ ∨ ψ X Fϕ ∨ χ X Fϕ ∨ ψ X Fϕ ∨ χ X ∧F 2
5. Fϕ Fϕ ∨F 4
6. Fψ Fψ ∨F 4
7. ⊗ Fϕ Fϕ ∨F 4
8. Fχ Fχ ∨F 4

Note that we moved the result of applying ∨F a second time below for clarity.
In this instance it would not have been needed, since the justifications would
have been the same.
Two branches remain open, and Tψ ∧ χ on line 3 remains unchecked. We
apply ∧T to it to obtain a closed tableau:


1. T ϕ ∨ (ψ ∧ χ) X Assumption
2. F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ) X Assumption

3. Tϕ Tψ ∧ χ X ∨T 1

4. Fϕ ∨ ψ X Fϕ ∨ χ X Fϕ ∨ ψ X Fϕ ∨ χ X ∧F 2
5. Fϕ Fϕ Fϕ Fϕ ∨F 4
6. Fψ Fχ Fψ Fχ ∨F 4
7. ⊗ ⊗ Tψ Tψ ∧T 3
8. Tχ Tχ ∧T 3
⊗ ⊗

For comparison, here’s a closed tableau for the same set of assumptions in
which the rules are applied in a different order:

1. T ϕ ∨ (ψ ∧ χ) X Assumption
2. F ( ϕ ∨ ψ) ∧ ( ϕ ∨ χ) X Assumption

3. Fϕ ∨ ψ X Fϕ ∨ χ X ∧F 2
4. Fϕ Fϕ ∨F 3
5. Fψ Fχ ∨F 3

6. Tϕ Tψ ∧ χ X Tϕ Tψ ∧ χ X ∨T 1
7. ⊗ Tψ ⊗ Tψ ∧T 3
8. Tχ Tχ ∧T 3
⊗ ⊗

17.6 Tableaux with Quantifiers


Example 17.7. When dealing with quantifiers, we have to make sure not to
violate the eigenvariable condition, and sometimes this requires us to play
around with the order of carrying out certain inferences. In general, it helps
to try and take care of rules subject to the eigenvariable condition first (they
will be higher up in the finished tableau).
Let’s see how we’d give a tableau for the sentence ∃ x ¬ ϕ( x ) → ¬∀ x ϕ( x ).
As usual, we start by recording the assumption,

1. F ∃ x ¬ ϕ( x ) → ¬∀ x ϕ( x ) Assumption

Since the main operator is →, we apply the →F:


1. F ∃ x ¬ ϕ( x ) → ¬∀ x ϕ( x ) X Assumption
2. T ∃ x ¬ ϕ( x ) →F 1
3. F ¬∀ x ϕ( x ) →F 1

The next line to deal with is 2. We use ∃T. This requires a new constant
symbol; since no constant symbols yet occur, we can pick any one, say, a.

1. F ∃ x ¬ ϕ( x ) → ¬∀ x ϕ( x ) X Assumption
2. T ∃ x ¬ ϕ( x ) X →F 1
3. F ¬∀ x ϕ( x ) →F 1
4. T ¬ ϕ( a) ∃T 2

Now we apply ¬F to line 3:

1. F ∃ x ¬ ϕ( x ) → ¬∀ x ϕ( x ) X Assumption
2. T ∃ x ¬ ϕ( x ) X →F 1
3. F ¬∀ x ϕ( x ) X →F 1
4. T ¬ ϕ( a) ∃T 2
5. T ∀ x ϕ( x ) ¬F 3

We obtain a closed tableau by applying ¬T to line 4, followed by ∀T to line 5.

1. F ∃ x ¬ ϕ( x ) → ¬∀ x ϕ( x ) X Assumption
2. T ∃ x ¬ ϕ( x ) X →F 1
3. F ¬∀ x ϕ( x ) X →F 1
4. T ¬ ϕ( a) ∃T 2
5. T ∀ x ϕ( x ) ¬F 3
6. F ϕ( a) ¬T 4
7. T ϕ( a) ∀T 5

Example 17.8. Let’s see how we’d give a tableau for the set

F ∃ x χ( x, b), T ∃ x ( ϕ( x ) ∧ ψ( x )), T ∀ x (ψ( x ) → χ( x, b)).

As usual, we start with the assumptions:

1. F ∃ x χ( x, b) Assumption
2. T ∃ x ( ϕ( x ) ∧ ψ( x )) Assumption
3. T ∀ x (ψ( x ) → χ( x, b)) Assumption

We should always apply a rule with the eigenvariable condition first; in this
case that would be ∃T to line 2. Since the assumptions contain the constant
symbol b, we have to use a different one; let’s pick a again.


1. F ∃ x χ( x, b) Assumption
2. T ∃ x ( ϕ( x ) ∧ ψ( x )) X Assumption
3. T ∀ x (ψ( x ) → χ( x, b)) Assumption
4. T ϕ( a) ∧ ψ( a) ∃T 2

If we now apply ∃F to line 1 or ∀T to line 3, we have to decide which term t to substitute for x. Since there is no eigenvariable condition for these rules, we can pick any term we like. In some cases we may even have to apply the rule several times with different ts. But as a general rule, it pays to pick one of the terms already occurring in the tableau—in this case, a and b—and in this case we can guess that a will be more likely to result in a closed branch.

1. F ∃ x χ( x, b) Assumption
2. T ∃ x ( ϕ( x ) ∧ ψ( x )) X Assumption
3. T ∀ x (ψ( x ) → χ( x, b)) Assumption
4. T ϕ( a) ∧ ψ( a) ∃T 2
5. F χ( a, b) ∃F 1
6. Tψ( a) → χ( a, b) ∀T 3

We don’t check the signed formulas in lines 1 and 3, since we may have to use
them again. Now apply ∧T to line 4:

1. F ∃ x χ( x, b) Assumption
2. T ∃ x ( ϕ( x ) ∧ ψ( x )) X Assumption
3. T ∀ x (ψ( x ) → χ( x, b)) Assumption
4. T ϕ( a) ∧ ψ( a) X ∃T 2
5. F χ( a, b) ∃F 1
6. Tψ( a) → χ( a, b) ∀T 3
7. T ϕ( a) ∧T 4
8. Tψ( a) ∧T 4

If we now apply →T to line 6, the tableau closes:

1. F ∃ x χ( x, b) Assumption
2. T ∃ x ( ϕ( x ) ∧ ψ( x )) X Assumption
3. T ∀ x (ψ( x ) → χ( x, b)) Assumption
4. T ϕ( a) ∧ ψ( a) X ∃T 2
5. F χ( a, b) ∃F 1
6. Tψ( a) → χ( a, b) X ∀T 3
7. T ϕ( a) ∧T 4
8. Tψ( a) ∧T 4

9. F ψ( a) Tχ( a, b) →T 6
⊗ ⊗


Example 17.9. We construct a tableau for the set

T ∀ x ϕ( x ), T ∀ x ϕ( x ) → ∃y ψ(y), T ¬∃y ψ(y).

Starting as usual, we write down the assumptions:

1. T ∀ x ϕ( x ) Assumption
2. T ∀ x ϕ( x ) → ∃y ψ(y) Assumption
3. T ¬∃y ψ(y) Assumption

We begin by applying the ¬T rule to line 3. A corollary to the rule “always apply rules with eigenvariable conditions first” is “defer applying quantifier
rules without eigenvariable conditions until needed.” Also, defer rules that
result in a split.

1. T ∀ x ϕ( x ) Assumption
2. T ∀ x ϕ( x ) → ∃y ψ(y) Assumption
3. T ¬∃y ψ(y) X Assumption
4. F ∃y ψ(y) ¬T 3

The new line 4 requires ∃F, a quantifier rule without the eigenvariable condi-
tion. So we defer this in favor of using →T on line 2.

1. T ∀ x ϕ( x ) Assumption
2. T ∀ x ϕ( x ) → ∃y ψ(y) X Assumption
3. T ¬∃y ψ(y) X Assumption
4. F ∃y ψ(y) ¬T 3

5. F ∀ x ϕ( x ) T ∃y ψ(y) →T 2

Both new signed formulas require rules with eigenvariable conditions, so these
should be next:

1. T ∀ x ϕ( x ) Assumption
2. T ∀ x ϕ( x ) → ∃y ψ(y) X Assumption
3. T ¬∃y ψ(y) X Assumption
4. F ∃y ψ(y) X ¬T 3

5. F ∀ x ϕ( x ) T ∃y ψ(y) →T 2
6. F ϕ(b) Tψ(c) ∀F 5; ∃T 5

To close the branches, we have to use the signed formulas on lines 1 and 4.
The corresponding rules (∀T and ∃F) don’t have eigenvariable conditions, so
we are free to pick whichever terms are suitable. In this case, that’s b and c,
respectively.


1. T ∀ x ϕ( x ) Assumption
2. T ∀ x ϕ( x ) → ∃y ψ(y) X Assumption
3. T ¬∃y ψ(y) X Assumption
4. F ∃y ψ(y) X ¬T 3

5. F ∀ x ϕ( x ) T ∃y ψ(y) →T 2
6. F ϕ(b) Tψ(c) ∀F 5; ∃T 5
7. T ϕ(b) F ψ(c) ∀T 1; ∃F 4

17.7 Proof-Theoretic Notions

This section collects the definitions of the provability relation and con-
sistency for tableaux.

Just as we’ve defined a number of important semantic notions (validity, entailment, satisfiability), we now define corresponding proof-theoretic notions. These
are not defined by appeal to satisfaction of sentences in structures, but by ap-
peal to the existence of certain closed tableaux. It was an important discovery
that these notions coincide. That they do is the content of the soundness and
completeness theorems.
Definition 17.10 (Theorems). A sentence ϕ is a theorem if there is a closed
tableau for F ϕ. We write ` ϕ if ϕ is a theorem and 0 ϕ if it is not.
Definition 17.11 (Derivability). A sentence ϕ is derivable from a set of sen-
tences Γ, Γ ` ϕ, iff there is a finite set {ψ1 , . . . , ψn } ⊆ Γ and a closed tableau
for the set
{F ϕ, Tψ1 , . . . , Tψn }
If ϕ is not derivable from Γ we write Γ 0 ϕ.
Definition 17.12 (Consistency). A set of sentences Γ is inconsistent iff there is
a finite set {ψ1 , . . . , ψn } ⊆ Γ and a closed tableau for the set
{Tψ1 , . . . , Tψn }.
If Γ is not inconsistent, we say it is consistent.
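
For finite, purely propositional Γ, these definitions correspond directly to calls of the search function sketched in section 17.4. This is a toy correspondence; the quantifier rules and infinite sets Γ are beyond that code:

def derives(gamma, phi):
    # Γ ` ϕ: there is a closed tableau for {F ϕ} ∪ {T ψ : ψ ∈ Γ}.
    return has_closed_tableau([('F', phi)] + [('T', psi) for psi in gamma])

def consistent(gamma):
    # Γ is consistent iff there is no closed tableau for {T ψ : ψ ∈ Γ}.
    return not has_closed_tableau([('T', psi) for psi in gamma])

# For instance, derives(['p'], 'p') == True, matching Reflexivity below.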
Proposition 17.13 (Reflexivity). If ϕ ∈ Γ, then Γ ` ϕ.

Proof. If ϕ ∈ Γ, { ϕ} is a finite subset of Γ and the tableau

1. Fϕ Assumption
2. Tϕ Assumption


is closed.

Proposition 17.14 (Monotony). If Γ ⊆ ∆ and Γ ` ϕ, then ∆ ` ϕ.

Proof. Any finite subset of Γ is also a finite subset of ∆.

Proposition 17.15 (Transitivity). If Γ ` ϕ and { ϕ} ∪ ∆ ` ψ, then Γ ∪ ∆ ` ψ.

Proof. If { ϕ} ∪ ∆ ` ψ, then there is a finite subset ∆0 = {χ1 , . . . , χn } ⊆ ∆ such that

{F ψ,T ϕ, Tχ1 , . . . , Tχn }

has a closed tableau. If Γ ` ϕ then there are θ1 , . . . , θm such that

{F ϕ,Tθ1 , . . . , Tθm }

has a closed tableau.


Now consider the tableau with assumptions

F ψ, Tχ1 , . . . , Tχn , Tθ1 , . . . , Tθm .

Apply the Cut rule on ϕ. This generates two branches, one has T ϕ in it, the
other F ϕ. Thus, on the one branch, all of

{F ψ, T ϕ, Tχ1 , . . . , Tχn }

are available. Since there is a closed tableau for these assumptions, we can
attach it to that branch; every branch through T ϕ closes. On the other branch,
all of
{F ϕ, Tθ1 , . . . , Tθm }
are available, so we can also complete the other side to obtain a closed tableau.
This shows Γ ∪ ∆ ` ψ.

Note that this means that in particular if Γ ` ϕ and ϕ ` ψ, then Γ ` ψ. It follows also that if ϕ1 , . . . , ϕn ` ψ and Γ ` ϕi for each i, then Γ ` ψ.

Proposition 17.16. Γ is inconsistent iff Γ ` ϕ for every sentence ϕ.

Proof. Exercise.

Proposition 17.17 (Compactness). 1. If Γ ` ϕ then there is a finite subset Γ0 ⊆ Γ such that Γ0 ` ϕ.

2. If every finite subset of Γ is consistent, then Γ is consistent.


Proof. 1. If Γ ` ϕ, then there is a finite subset Γ0 = {ψ1 , . . . , ψn } and a closed tableau for
F ϕ, Tψ1 , · · · Tψn
This tableau also shows Γ0 ` ϕ.
2. If Γ is inconsistent, then for some finite subset Γ0 = {ψ1 , . . . , ψn } there is
a closed tableau for
Tψ1 , · · · Tψn
This closed tableau shows that Γ0 is inconsistent.

17.8 Derivability and Consistency


We will now establish a number of properties of the derivability relation. They
are independently interesting, but each will play a role in the proof of the
completeness theorem.
Proposition 17.18. If Γ ` ϕ and Γ ∪ { ϕ} is inconsistent, then Γ is inconsistent.

Proof. There are finite Γ0 = {ψ1 , . . . , ψn } and Γ1 = {χ1 , . . . , χm } ⊆ Γ such that

{F ϕ,Tψ1 , . . . , Tψn }
{T ¬ ϕ,Tχ1 , . . . , Tχm }
have closed tableaux. Using the Cut rule on ϕ we can combine these into a
single closed tableau that shows Γ0 ∪ Γ1 is inconsistent. Since Γ0 ⊆ Γ and
Γ1 ⊆ Γ, Γ0 ∪ Γ1 ⊆ Γ, hence Γ is inconsistent.

Proposition 17.19. Γ ` ϕ iff Γ ∪ {¬ ϕ} is inconsistent.

Proof. First suppose Γ ` ϕ, i.e., there is a closed tableau for

{F ϕ, Tψ1 , . . . , Tψn }
Using the ¬T rule, this can be turned into a closed tableau for

{T ¬ ϕ, Tψ1 , . . . , Tψn }.
On the other hand, if there is a closed tableau for the latter, we can turn it
into a closed tableau of the former by removing every formula that results
from ¬T applied to the first assumption T ¬ ϕ as well as that assumption,
and adding the assumption F ϕ. For if a branch was closed before because
it contained the conclusion of ¬T applied to T ¬ ϕ, i.e., F ϕ, the corresponding
branch in the new tableau is also closed. If a branch in the old tableau was
closed because it contained the assumption T ¬ ϕ as well as F ¬ ϕ we can turn
it into a closed branch by applying ¬F to F ¬ ϕ to obtain T ϕ. This closes the
branch since we added F ϕ as an assumption.


Proposition 17.20. If Γ ` ϕ and ¬ ϕ ∈ Γ, then Γ is inconsistent.

Proof. Suppose Γ ` ϕ and ¬ ϕ ∈ Γ. Then there are ψ1 , . . . , ψn ∈ Γ such that


{F ϕ, Tψ1 , . . . , Tψn }
has a closed tableau. Replace the assumption F ϕ by T ¬ ϕ, and insert the
conclusion of ¬T applied to T ¬ ϕ, i.e., F ϕ, after the assumptions. Any sentence in the
tableau justified by appeal to line 1 in the old tableau is now justified by appeal
to line n + 1. So if the old tableau was closed, the new one is. It shows that Γ
is inconsistent, since all assumptions are in Γ.

Proposition 17.21. If Γ ∪ { ϕ} and Γ ∪ {¬ ϕ} are both inconsistent, then Γ is in-


consistent.

Proof. If there are ψ1 , . . . , ψn ∈ Γ and χ1 , . . . , χm ∈ Γ such that


{T ϕ,Tψ1 , . . . , Tψn }
{T ¬ ϕ,Tχ1 , . . . , Tχm }
both have closed tableaux, we can construct a tableau that shows that Γ is
inconsistent by using as assumptions Tψ1 , . . . , Tψn together with Tχ1 , . . . ,
Tχm , followed by an application of the Cut rule, yielding two branches, one
starting with T ϕ, the other with F ϕ. Add on the part below the assumptions
of the first tableau on the left side. Here, every rule application is still correct,
and every branch closes. On the right side, add the part below the assump-
tions of the second tableau, with the results of any applications of ¬T to T ¬ ϕ
removed.
For if a branch was closed before because it contained the conclusion of
¬T applied to T ¬ ϕ, i.e., F ϕ, together with T ϕ, the corresponding branch in the
new tableau is also closed, since it contains T ϕ as well as the F ϕ introduced by
the Cut. If a branch in the old tableau was closed because it contained the
assumption T ¬ ϕ as well as F ¬ ϕ, we can turn it into a closed branch by applying
¬F to F ¬ ϕ to obtain T ϕ, which closes the branch against the F ϕ from the Cut.

17.9 Derivability and the Propositional Connectives


Proposition 17.22. 1. Both ϕ ∧ ψ ` ϕ and ϕ ∧ ψ ` ψ.
2. ϕ, ψ ` ϕ ∧ ψ.

Proof. 1. Both {F ϕ, T ϕ ∧ ψ} and {F ψ, T ϕ ∧ ψ} have closed tableaux

1. Fϕ Assumption
2. Tϕ ∧ ψ Assumption
3. Tϕ ∧T 2
4. Tψ ∧T 2


1. Fψ Assumption
2. Tϕ ∧ ψ Assumption
3. Tϕ ∧T 2
4. Tψ ∧T 2

2. Here is a closed tableau for {T ϕ, Tψ, F ϕ ∧ ψ}:

1. Fϕ ∧ ψ Assumption
2. Tϕ Assumption
3. Tψ Assumption

4. Fϕ Fψ ∧F 1
⊗ ⊗

Proposition 17.23. 1. ϕ ∨ ψ, ¬ ϕ, ¬ψ is inconsistent.

2. Both ϕ ` ϕ ∨ ψ and ψ ` ϕ ∨ ψ.

Proof. 1. We give a closed tableau of {T ϕ ∨ ψ, T ¬ ϕ, T ¬ψ}:

1. Tϕ ∨ ψ Assumption
2. T¬ ϕ Assumption
3. T ¬ψ Assumption
4. Fϕ ¬T 2
5. Fψ ¬T 3

6. Tϕ Tψ ∨T 1
⊗ ⊗

2. Both {F ϕ ∨ ψ, T ϕ} and {F ϕ ∨ ψ, Tψ} have closed tableaux:

1. Fϕ ∨ ψ Assumption
2. Tϕ Assumption
3. Fϕ ∨F 1
4. Fψ ∨F 1


1. Fϕ ∨ ψ Assumption
2. Tψ Assumption
3. Fϕ ∨F 1
4. Fψ ∨F 1

Proposition 17.24. 1. ϕ, ϕ → ψ ` ψ.

2. Both ¬ ϕ ` ϕ → ψ and ψ ` ϕ → ψ.

Proof. 1. {F ψ, T ϕ → ψ, T ϕ} has a closed tableau:

1. Fψ Assumption
2. Tϕ → ψ Assumption
3. Tϕ Assumption

4. Fϕ Tψ →T 2
⊗ ⊗

2. Both {F ϕ → ψ, T ¬ ϕ} and {F ϕ → ψ, T ¬ψ} have closed tableaux:

1. Fϕ → ψ Assumption
2. T¬ ϕ Assumption
3. Tϕ →F 1
4. Fψ →F 1
5. Fϕ ¬T 2

1. Fϕ → ψ Assumption
2. T ¬ψ Assumption
3. Tϕ →F 1
4. Fψ →F 1
5. Fψ ¬T 2


17.10 Derivability and the Quantifiers


Theorem 17.25. If c is a constant not occurring in Γ or ϕ( x ) and Γ ` ϕ(c), then
Γ ` ∀ x ϕ ( x ).

Proof. Suppose Γ ` ϕ(c), i.e., there are ψ1 , . . . , ψn ∈ Γ and a closed tableau for
{F ϕ(c),Tψ1 , . . . , Tψn }.

We have to show that there is also a closed tableau for

{F ∀ x ϕ( x ),Tψ1 , . . . , Tψn }.
Take the closed tableau and replace the first assumption with F ∀ x ϕ( x ), and
insert F ϕ(c) after the assumptions.

Old tableau: F ϕ(c), Tψ1 , . . . , Tψn
New tableau: F ∀ x ϕ( x ), Tψ1 , . . . , Tψn , F ϕ(c)

The tableau is still closed, since all sentences available as assumptions before
are still available at the top of the tableau. The inserted line is the result of
a correct application of ∀F, since the constant symbol c does not occur in ψ1 ,
. . . , ψn or ∀ x ϕ( x ), i.e., it does not occur above the inserted line in the new
tableau.

Proposition 17.26. 1. ϕ(t) ` ∃ x ϕ( x ).


2. ∀ x ϕ( x ) ` ϕ(t).

Proof. 1. A closed tableau for F ∃ x ϕ( x ), T ϕ(t) is:

1. F ∃ x ϕ( x ) Assumption
2. T ϕ(t) Assumption
3. F ϕ(t) ∃F 1

2. A closed tableau for F ϕ(t), T ∀ x ϕ( x ), is:

1. F ϕ(t) Assumption
2. T ∀ x ϕ( x ) Assumption
3. T ϕ(t) ∀T 2


17.11 Soundness
A derivation system, such as tableaux, is sound if it cannot derive things that
do not actually hold. Soundness is thus a kind of guaranteed safety property
for derivation systems. Depending on which proof theoretic property is in
question, we would like to know, for instance, that

1. every derivable ϕ is valid;

2. if a sentence is derivable from some others, it is also a consequence of


them;

3. if a set of sentences is inconsistent, it is unsatisfiable.

These are important properties of a derivation system. If any of them do not


hold, the derivation system is deficient—it would derive too much. Conse-
quently, establishing the soundness of a derivation system is of the utmost
importance.
Because all these proof-theoretic properties are defined via closed tableaux
of some kind or other, proving (1)–(3) above requires proving something about
the semantic properties of closed tableaux. We will first define what it means
for a signed formula to be satisfied in a structure, and then show that if a
tableau is closed, no structure satisfies all its assumptions. (1)–(3) then follow
as corollaries from this result.

Definition 17.27. A structure M satisfies a signed formula T ϕ iff M  ϕ, and


it satisfies F ϕ iff M 2 ϕ. M satisfies a set of signed formulas Γ iff it satis-
fies every S ϕ ∈ Γ. Γ is satisfiable if there is a structure that satisfies it, and
unsatisfiable otherwise.
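
To make the definition concrete, here is a minimal Python sketch of satisfaction of signed formulas, restricted to the propositional fragment. The representation of formulas as nested tuples and of a structure as a dict of truth values is our own convention for illustration, not anything fixed by the text.

    def sat(v, phi):
        # Classical satisfaction: v maps atom names to True/False.
        op = phi[0]
        if op == 'atom':
            return v[phi[1]]
        if op == 'not':
            return not sat(v, phi[1])
        if op == 'and':
            return sat(v, phi[1]) and sat(v, phi[2])
        if op == 'or':
            return sat(v, phi[1]) or sat(v, phi[2])
        if op == 'imp':
            return (not sat(v, phi[1])) or sat(v, phi[2])
        raise ValueError(op)

    def sat_signed(v, signed):
        # v satisfies T phi iff v satisfies phi, and F phi iff it doesn't.
        sign, phi = signed
        return sat(v, phi) if sign == 'T' else not sat(v, phi)

    # E.g., the set {T p, F (p -> q)} is satisfied by
    # v = {'p': True, 'q': False}, so it is satisfiable.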

Theorem 17.28 (Soundness). If Γ has a closed tableau, Γ is unsatisfiable.

Proof. Let’s call a branch of a tableau satisfiable iff the set of signed formulas
on it is satisfiable, and let’s call a tableau satisfiable if it contains at least one
satisfiable branch.
We show the following: Extending a satisfiable tableau by one of the rules
of inference always results in a satisfiable tableau. This will prove the theo-
rem: any closed tableau results by applying rules of inference to the tableau
consisting only of assumptions from Γ. So if Γ were satisfiable, any tableau
for it would be satisfiable. A closed tableau, however, is clearly not satisfiable:
every branch contains both T ϕ and F ϕ, and no structure can both satisfy and
not satisfy ϕ.
Suppose we have a satisfiable tableau, i.e., a tableau with at least one sat-
isfiable branch. Applying a rule of inference either adds signed formulas to a


branch, or splits a branch in two. If the tableau has a satisfiable branch which
is not extended by the rule application in question, it remains a satisfiable
branch in the extended tableau, so the extended tableau is satisfiable. So we
only have to consider the case where a rule is applied to a satisfiable branch.
Let Γ be the set of signed formulas on that branch, and let S ϕ ∈ Γ be the
signed formula to which the rule is applied. If the rule does not result in a
split branch, we have to show that the extended branch, i.e., Γ together with
the conclusions of the rule, is still satisfiable. If the rule results in a split branch,
we have to show that at least one of the two resulting branches is satisfiable.
First, we consider the possible inferences with only one premise.
1. The branch is expanded by applying ¬T to T ¬ψ ∈ Γ. Then the extended
branch contains the signed formulas Γ ∪ {F ψ}. Suppose M  Γ. In
particular, M  ¬ψ. Thus, M 2 ψ, i.e., M satisfies F ψ.
2. The branch is expanded by applying ¬F to F ¬ψ ∈ Γ: Exercise.
3. The branch is expanded by applying ∧T to Tψ ∧ χ ∈ Γ, which results in
two new signed formulas on the branch: Tψ and Tχ. Suppose M  Γ,
in particular M  ψ ∧ χ. Then M  ψ and M  χ. This means that M
satisfies both Tψ and Tχ.
4. The branch is expanded by applying ∨F to F ψ ∨ χ ∈ Γ: Exercise.
5. The branch is expanded by applying →F to F ψ → χ ∈ Γ: This results in
two new signed formulas on the branch: Tψ and F χ. Suppose M  Γ,
in particular M 2 ψ → χ. Then M  ψ and M 2 χ. This means that M
satisfies both Tψ and F χ.
6. The branch is expanded by applying ∀T to T ∀ x ψ( x ) ∈ Γ: This results
in a new signed formula Tψ(t) on the branch. Suppose M  Γ, in par-
ticular, M  ∀ x ψ( x ). By ??, M  ψ(t). Consequently, M satisfies Tψ(t).
7. The branch is expanded by applying ∀F to F ∀ x ψ( x ) ∈ Γ: This results in
a new signed formula F ψ( a) where a is a constant symbol not occurring
in Γ. Since Γ is satisfiable, there is an M such that M  Γ, in particular
M 2 ∀ x ψ( x ). We have to show that Γ ∪ {F ψ( a)} is satisfiable. To do
this, we define a suitable M0 as follows.
By ??, M 2 ∀ x ψ( x ) iff for some s, M, s 2 ψ( x ). Now let M0 be just like
M, except that the value of a in M0 is s( x ). By ??, for any Tχ ∈ Γ, M0  χ,
and for any F χ ∈ Γ, M0 2 χ, since a does not occur in Γ.
By ??, M0 , s 2 ψ( x ). By ??, M0 , s 2 ψ( a). Since ψ( a) is a sentence, by ??,
M0 2 ψ( a), i.e., M0 satisfies F ψ( a).
8. The branch is expanded by applying ∃T to T ∃ x ψ( x ) ∈ Γ: Exercise.
9. The branch is expanded by applying ∃F to F ∃ x ψ( x ) ∈ Γ: Exercise.


Now let’s consider the possible inferences with two premises.

1. The branch is expanded by applying ∧F to F ψ ∧ χ ∈ Γ, which results in


two branches, a left one continuing through F ψ and a right one through
F χ. Suppose M  Γ, in particular M 2 ψ ∧ χ. Then M 2 ψ or M 2 χ. In
the former case, M satisfies F ψ, i.e., M satisfies the formulas on the left
branch. In the latter, M satisfies F χ, i.e., M satisfies the formulas on the
right branch.

2. The branch is expanded by applying ∨T to Tψ ∨ χ ∈ Γ: Exercise.

3. The branch is expanded by applying →T to Tψ → χ ∈ Γ: Exercise.

4. The branch is expanded by Cut: This results in two branches, one con-
taining Tψ, the other containing F ψ. Since M  Γ and either M  ψ or
M 2 ψ, M satisfies either the left or the right branch.

Corollary 17.29. If ` ϕ then ϕ is valid.

Corollary 17.30. If Γ ` ϕ then Γ  ϕ.

Proof. If Γ ` ϕ then for some ψ1 , . . . , ψn ∈ Γ, {F ϕ, Tψ1 , . . . , Tψn } has a closed


tableau. By ??, every structure M either makes some ψi false or makes ϕ true.
Hence, if M  Γ then also M  ϕ.

Corollary 17.31. If Γ is satisfiable, then it is consistent.

Proof. We prove the contrapositive. Suppose that Γ is not consistent. Then


there are ψ1 , . . . , ψn ∈ Γ and a closed tableau for {Tψ1 , . . . , Tψn }. By ??, there is
no M such that M  ψi for all i = 1, . . . , n. But then Γ is not satisfiable.

17.12 Tableaux with Identity predicate


Tableaux with identity predicate require additional inference rules. The rules
for = are (t, t1 , and t2 are closed terms):

=: Tt = t may be added to any branch.
=T: if Tt1 = t2 and T ϕ(t1 ) are on the branch, T ϕ(t2 ) may be added.
=F: if Tt1 = t2 and F ϕ(t1 ) are on the branch, F ϕ(t2 ) may be added.

Note that in contrast to all the other rules, =T and =F require that two
signed formulas already appear on the branch, namely both Tt1 = t2 and
S ϕ ( t1 ).
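
The following Python sketch shows how an application of =T or =F might be mechanized, for branches consisting of atomic signed formulas only. Terms are constant strings or tuples ('f', t1, ..., tn), atoms are tuples ('R', t1, ..., tn) or ('=', s, t); this representation, and the choice to replace every occurrence of t1 (which is one admissible choice of ϕ( x )), are ours, not the text's.

    def replace(term, t1, t2):
        # Replace every occurrence of the term t1 inside `term` by t2.
        if term == t1:
            return t2
        if isinstance(term, tuple):              # function term ('f', ...)
            return (term[0],) + tuple(replace(a, t1, t2) for a in term[1:])
        return term

    def apply_eq_rules(branch, t1, t2):
        # Given Tt1 = t2 on the branch, add S phi(t2) for each signed
        # atomic formula S phi(t1) on the branch.
        assert ('T', ('=', t1, t2)) in branch
        new = set(branch)
        for sign, atom in branch:
            new.add((sign, (atom[0],) + tuple(replace(a, t1, t2)
                                              for a in atom[1:])))
        return new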


Example 17.32. If s and t are closed terms, then s = t, ϕ(s) ` ϕ(t):

1. F ϕ(t) Assumption
2. Ts = t Assumption
3. T ϕ(s) Assumption
4. T ϕ(t) =T 2, 3

This may be familiar as the principle of substitutability of identicals, or Leib-


niz’ Law.
Tableaux prove that = is symmetric:

1. Ft = s Assumption
2. Ts = t Assumption
3. Ts = s =
4. Tt = s =T 2, 3

Here, line 2 is the first prerequisite formula Ts = t of =T, and line 3 the
second one, T ϕ(s)—think of ϕ( x ) as x = s, then ϕ(s) is s = s and ϕ(t) is
t = s.
They also prove that = is transitive:

1. F t1 = t3 Assumption
2. Tt1 = t2 Assumption
3. Tt2 = t3 Assumption
4. Tt1 = t3 =T 3, 2

In this tableau, the first prerequisite formula of =T is line 3, Tt2 = t3 . The


second one, T ϕ(t2 ) is line 2. Think of ϕ( x ) as t1 = x; that makes ϕ(t2 ) into
t1 = t2 and ϕ(t3 ) into t1 = t3 .

17.13 Soundness with Identity predicate


Proposition 17.33. Tableaux with rules for identity are sound: no closed tableau is
satisfiable.

Proof. We just have to show as before that if a tableau has a satisfiable branch,
the branch resulting from applying one of the rules for = to it is also satisfi-
able. Let Γ be the set of signed formulas on the branch, and let M be a struc-
ture satisfying Γ.
Suppose the branch is expanded using =, i.e., by adding the signed for-
mula Tt = t. Trivially, M  t = t, so M also satisfies Γ ∪ {Tt = t}.


If the branch is expanded using =T, we add a signed formula S ϕ(t2 ),


but Γ contains both Tt1 = t2 and T ϕ(t1 ). Thus we have M  t1 = t2 and
M  ϕ(t1 ). Let s be a variable assignment with s( x ) = ValM (t1 ). By ??,
M, s  ϕ(t1 ). Since s ∼ x s, by ??, M, s  ϕ( x ). Since M  t1 = t2 , we have
ValM (t1 ) = ValM (t2 ), and hence s( x ) = ValM (t2 ). By applying ?? again,
we also have M, s  ϕ(t2 ). By ??, M  ϕ(t2 ). The case of =F is treated
similarly.

Problems
Problem 17.1. Give closed tableaux of the following:

1. F ¬( ϕ → ψ) → ( ϕ ∧ ¬ψ)

2. F ( ϕ → χ) ∨ (ψ → χ), T ( ϕ ∧ ψ) → χ

Problem 17.2. Give closed tableaux of the following:

1. F ∃y ϕ(y) → ψ, T ∀ x ( ϕ( x ) → ψ)

2. F ∃ x ( ϕ( x ) → ∀y ϕ(y))

Problem 17.3. Prove ??

Problem 17.4. Prove that Γ ` ¬ ϕ iff Γ ∪ { ϕ} is inconsistent.

Problem 17.5. Complete the proof of ??.

Problem 17.6. Give closed tableaux for the following:

1. F ∀ x ∀y (( x = y ∧ ϕ( x )) → ϕ(y))

2. F ∃ x ( ϕ( x ) ∧ ∀y ( ϕ(y) → y = x )),
T ∃ x ϕ( x ) ∧ ∀y ∀z (( ϕ(y) ∧ ϕ(z)) → y = z)



Chapter 18

Axiomatic Derivations

No effort has been made yet to ensure that the material in this chap-
ter respects various tags indicating which connectives and quantifiers are
primitive or defined: all are assumed to be primitive. If the FOL tag is
true, we produce a version with quantifiers, otherwise without.

18.1 Rules and Derivations


Axiomatic derivations are perhaps the simplest proof system for logic. A
derivation is just a sequence of formulas. To count as a derivation, every for-
mula in the sequence must either be an instance of an axiom, or must follow
from one or more formulas that precede it in the sequence by a rule of infer-
ence. A derivation derives its last formula.
Definition 18.1 (Derivability). If Γ is a set of formulas of L then a derivation
from Γ is a finite sequence ϕ1 , . . . , ϕn of formulas where for each i ≤ n one of
the following holds:
1. ϕi ∈ Γ; or
2. ϕi is an axiom; or
3. ϕi follows from some ϕ j (and ϕk ) with j < i (and k < i) by a rule of
inference.
What counts as a correct derivation depends on which inference rules we
allow (and of course what we take to be axioms). And an inference rule is an
if-then statement that tells us that, under certain conditions, a step ϕi in a
derivation is a correct inference step.
Definition 18.2 (Rule of inference). A rule of inference gives a sufficient condi-
tion for what counts as a correct inference step in a derivation from Γ.


For instance, since any one-element sequence ϕ with ϕ ∈ Γ trivially counts


as a derivation, the following might be a very simple rule of inference:

If ϕ ∈ Γ, then ϕ is always a correct inference step in any derivation


from Γ.

Similarly, if ϕ is one of the axioms, then ϕ by itself is a derivation, and so this


is also a rule of inference:

If ϕ is an axiom, then ϕ is a correct inference step.

It gets more interesting if the rule of inference appeals to formulas that appear
before the step considered. The following rule is called modus ponens:

If ψ → ϕ and ψ occur higher up in the derivation, then ϕ is a correct


inference step.

If this is the only rule of inference, then our definition of derivation above
amounts to this: ϕ1 , . . . , ϕn is a derivation iff for each i ≤ n one of the follow-
ing holds:

1. ϕi ∈ Γ; or

2. ϕi is an axiom; or

3. for some j < i, ϕ j is ψ → ϕi , and for some k < i, ϕk is ψ.

The last clause says that ϕi follows from ϕ j (i.e., ψ → ϕi ) and ϕk (i.e., ψ) by
modus ponens. If we can go from 1 to n, and each time we find a formula ϕi
that is either in Γ, an axiom, or a correct inference step according to some rule
of inference, then the entire sequence counts as a correct derivation.
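
This characterization is directly checkable by a program. Here is a minimal Python sketch, assuming formulas are represented as nested tuples with implications written ('imp', ψ, ϕ), Γ is given as a set, and axiomhood is decided by some predicate passed in; the representation is ours, not the text's.

    def check_derivation(lines, gamma, is_axiom):
        # `lines` is a finite sequence of formulas; return True iff it
        # is a derivation from gamma in the sense just described.
        for i, phi in enumerate(lines):
            if phi in gamma or is_axiom(phi):
                continue
            earlier = lines[:i]
            # modus ponens: some earlier psi and earlier psi -> phi
            if any(('imp', psi, phi) in earlier for psi in earlier):
                continue
            return False
        return True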

Definition 18.3 (Derivability). A formula ϕ is derivable from Γ, written Γ ` ϕ,


if there is a derivation from Γ ending in ϕ.

Definition 18.4 (Theorems). A formula ϕ is a theorem if there is a derivation


of ϕ from the empty set. We write ` ϕ if ϕ is a theorem and 0 ϕ if it is not.


18.2 Axiom and Rules for the Propositional Connectives


Definition 18.5 (Axioms). The set Ax0 of axioms for the propositional con-
nectives comprises all formulas of the following forms:

( ϕ ∧ ψ) → ϕ (18.1)
( ϕ ∧ ψ) → ψ (18.2)
ϕ → (ψ → ( ϕ ∧ ψ)) (18.3)
ϕ → ( ϕ ∨ ψ) (18.4)
ϕ → (ψ ∨ ϕ) (18.5)
( ϕ → χ) → ((ψ → χ) → (( ϕ ∨ ψ) → χ)) (18.6)
ϕ → (ψ → ϕ) (18.7)
( ϕ → (ψ → χ)) → (( ϕ → ψ) → ( ϕ → χ)) (18.8)
( ϕ → ψ) → (( ϕ → ¬ψ) → ¬ ϕ) (18.9)
¬ ϕ → ( ϕ → ψ) (18.10)
> (18.11)
⊥→ϕ (18.12)
( ϕ → ⊥) → ¬ ϕ (18.13)
¬¬ ϕ → ϕ (18.14)
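
Whether a given formula instantiates one of these schemas can be decided by pattern matching against the schema, unifying metavariables with subformulas. Here is a Python sketch in the tuple representation used in the sketch above; the metavariable convention ('?A', '?B') is ours.

    def match(schema, phi, env):
        # Does phi instantiate schema?  env maps metavariables to the
        # formulas they stand for.
        if isinstance(schema, str) and schema.startswith('?'):
            if schema in env:
                return env[schema] == phi
            env[schema] = phi
            return True
        if not (isinstance(schema, tuple) and isinstance(phi, tuple)):
            return schema == phi
        return (len(schema) == len(phi) and schema[0] == phi[0]
                and all(match(s, p, env)
                        for s, p in zip(schema[1:], phi[1:])))

    # e.g., the schema phi -> (psi -> phi):
    SCHEMA_K = ('imp', '?A', ('imp', '?B', '?A'))

    def is_axiom(phi, schemas=(SCHEMA_K,)):
        # A fresh environment per schema, since bindings are local.
        return any(match(s, phi, {}) for s in schemas)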

Definition 18.6 (Modus ponens). If ψ and ψ → ϕ already occur in a derivation,


then ϕ is a correct inference step.

We’ll abbreviate the rule modus ponens as “MP.”

18.3 Axioms and Rules for Quantifiers


Definition 18.7 (Axioms for quantifiers). The axioms governing quantifiers are
all instances of the following:

∀ x ψ → ψ ( t ), (18.15)
ψ(t) → ∃ x ψ. (18.16)

for any ground term t.

Definition 18.8 (Rules for quantifiers).


If ψ → ϕ( a) already occurs in the derivation and a does not occur in Γ or ψ,
then ψ → ∀ x ϕ( x ) is a correct inference step.
If ϕ( a) → ψ already occurs in the derivation and a does not occur in Γ or ψ,
then ∃ x ϕ( x ) → ψ is a correct inference step.

We’ll abbreviate either of these by “QR.”
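
The side condition on a (the eigenvariable condition) is easy to check mechanically. A Python sketch follows, under the made-up convention that constant symbols are the leaf strings beginning with 'c'; the helper and the convention are ours.

    def constants(x):
        # Collect the constant symbols occurring in a nested-tuple
        # formula; by the convention of this sketch, constants are the
        # leaf strings starting with 'c'.
        if isinstance(x, str):
            return {x} if x.startswith('c') else set()
        return set().union(*map(constants, x)) if x else set()

    def qr_allowed(a, gamma, psi):
        # Eigenvariable condition: a may occur neither in any formula
        # of gamma nor in psi (it may occur in phi(a) itself).
        if a in constants(psi):
            return False
        return all(a not in constants(chi) for chi in gamma)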


18.4 Examples of Derivations


Example 18.9. Suppose we want to prove (¬θ ∨ α) → (θ → α). Clearly, this is
not an instance of any of our axioms, so we have to use the MP rule to derive
it. Our only rule is MP, which given ϕ and ϕ → ψ allows us to justify ψ. One
strategy would be to use ?? with ϕ being ¬θ, ψ being α, and χ being θ → α, i.e.,
the instance

(¬θ → (θ → α)) → ((α → (θ → α)) → ((¬θ ∨ α) → (θ → α))).

Why? Two applications of MP yield the last part, which is what we want.
And we easily see that ¬θ → (θ → α) is an instance of ??, and α → (θ → α) is
an instance of ??. So our derivation is:

1. ¬θ → (θ → α) ??
2. (¬θ → (θ → α)) →
((α → (θ → α)) → ((¬θ ∨ α) → (θ → α))) ??
3. (α → (θ → α)) → ((¬θ ∨ α) → (θ → α)) 1, 2, MP
4. α → (θ → α) ??
5. (¬θ ∨ α) → (θ → α) 3, 4, MP

Example 18.10. Let’s try to find a derivation of θ → θ. It is not an instance of


an axiom, so we have to use MP to derive it. ?? is an axiom of the form ϕ → ψ
to which we could apply MP. To be useful, of course, the ψ which MP would
justify as a correct step in this case would have to be θ → θ, since this is what
we want to derive. That means ϕ would also have to be θ, i.e., we might look
at this instance of ??:
θ → (θ → θ )

In order to apply MP, we would also need to justify the corresponding second
premise, namely ϕ. But in our case, that would be θ, and we won’t be able to
derive θ by itself. So we need a different strategy.
The other axiom involving just → is ??, i.e.,

( ϕ → (ψ → χ)) → (( ϕ → ψ) → ( ϕ → χ))

We could get to the last nested conditional by applying MP twice. Again,


that would mean that we want an instance of ?? where ϕ → χ is θ → θ, the
formula we are aiming for. Then of course, ϕ and χ are both θ. How should
we pick ψ so that both ϕ → (ψ → χ) and ϕ → ψ, i.e., in our case θ → (ψ → θ )
and θ → ψ, are also derivable? Well, the first of these is already an instance of
??, whatever we decide ψ to be. And θ → ψ would be another instance of ?? if
ψ were (θ → θ ). So, our derivation is:


1. θ → ((θ → θ ) → θ ) ??
2. (θ → ((θ → θ ) → θ )) →
((θ → (θ → θ )) → (θ → θ )) ??
3. (θ → (θ → θ )) → (θ → θ ) 1, 2, MP
4. θ → (θ → θ ) ??
5. θ→θ 3, 4, MP

Example 18.11. Sometimes we want to show that there is a derivation of some


formula from some other formulas Γ. For instance, let’s show that we can
derive ϕ → χ from Γ = { ϕ → ψ, ψ → χ}.

1. ϕ→ψ Hyp
2. ψ→χ Hyp
3. (ψ → χ) → ( ϕ → (ψ → χ)) ??
4. ϕ → (ψ → χ) 2, 3, MP
5. ( ϕ → (ψ → χ)) →
(( ϕ → ψ) → ( ϕ → χ)) ??
6. (( ϕ → ψ) → ( ϕ → χ)) 4, 5, MP
7. ϕ→χ 1, 6, MP

The lines labelled “Hyp” (for “hypothesis”) indicate that the formula on that
line is an element of Γ.

Proposition 18.12. If Γ ` ϕ → ψ and Γ ` ψ → χ, then Γ ` ϕ → χ

Proof. Suppose Γ ` ϕ → ψ and Γ ` ψ → χ. Then there is a derivation of ϕ → ψ


from Γ; and a derivation of ψ → χ from Γ as well. Combine these into a single
derivation by concatenating them. Now add lines 3–7 of the derivation in the
preceding example. This is a derivation of ϕ → χ—which is the last line of the
new derivation—from Γ. Note that the justifications of lines 4 and 7 remain
valid if the reference to line number 2 is replaced by reference to the last line
of the derivation of ϕ → ψ, and reference to line number 1 by reference to the
last line of the derivation of ψ → χ.

18.5 Derivations with Quantifiers


Example 18.13. Let us give a derivation of (∀ x ϕ( x ) ∧ ∀y ψ(y)) → ∀ x ( ϕ( x ) ∧
ψ( x )).
First, note that

(∀ x ϕ( x ) ∧ ∀y ψ(y)) → ∀ x ϕ( x )

is an instance of ??, and

∀ x ϕ( x ) → ϕ( a)


of ??. So, by ??, we know that

(∀ x ϕ( x ) ∧ ∀y ψ(y)) → ϕ( a)

is derivable. Likewise, since

(∀ x ϕ( x ) ∧ ∀y ψ(y)) → ∀y ψ(y) and


∀y ψ(y) → ψ( a)

are instances of ?? and ??, respectively,

(∀ x ϕ( x ) ∧ ∀y ψ(y)) → ψ( a)

is derivable by ??. Using an appropriate instance of ?? and two applications


of MP, we see that

(∀ x ϕ( x ) ∧ ∀y ψ(y)) → ( ϕ( a) ∧ ψ( a))

is derivable. We can now apply QR to obtain

(∀ x ϕ( x ) ∧ ∀y ψ(y)) → ∀ x ( ϕ( x ) ∧ ψ( x )).

18.6 Proof-Theoretic Notions


Just as we’ve defined a number of important semantic notions (validity, entail-
ment, satisfiability), we now define corresponding proof-theoretic notions. These
are not defined by appeal to satisfaction of sentences in structures, but by ap-
peal to the derivability or non-derivability of certain formulas. It was an im-
portant discovery that these notions coincide. That they do is the content of
the soundness and completeness theorems.

Definition 18.14 (Derivability). A formula ϕ is derivable from Γ, written Γ ` ϕ,


if there is a derivation from Γ ending in ϕ.

Definition 18.15 (Theorems). A formula ϕ is a theorem if there is a derivation


of ϕ from the empty set. We write ` ϕ if ϕ is a theorem and 0 ϕ if it is not.

Definition 18.16 (Consistency). A set Γ of formulas is consistent if and only if


Γ 0 ⊥; it is inconsistent otherwise.

Proposition 18.17 (Reflexivity). If ϕ ∈ Γ, then Γ ` ϕ.

Proof. The formula ϕ by itself is a derivation of ϕ from Γ.

Proposition 18.18 (Monotony). If Γ ⊆ ∆ and Γ ` ϕ, then ∆ ` ϕ.


Proof. Any derivation of ϕ from Γ is also a derivation of ϕ from ∆.

Proposition 18.19 (Transitivity). If Γ ` ϕ and { ϕ} ∪ ∆ ` ψ, then Γ ∪ ∆ ` ψ.

Proof. Suppose { ϕ} ∪ ∆ ` ψ. Then there is a derivation ψ1 , . . . , ψl = ψ


from { ϕ} ∪ ∆. Some of the steps in that derivation will be correct because
of a rule which refers to a prior line ψi = ϕ. By hypothesis, there is a deriva-
tion of ϕ from Γ, i.e., a derivation ϕ1 , . . . , ϕk = ϕ where every ϕi is an axiom,
an element of Γ, or correct by a rule of inference. Now consider the sequence

ϕ1 , . . . , ϕk = ϕ, ψ1 , . . . , ψl = ψ.

This is a correct derivation of ψ from Γ ∪ ∆ since every ψi = ϕ is now justified


by the same rule which justifies ϕk = ϕ.

Note that this means that in particular if Γ ` ϕ and ϕ ` ψ, then Γ ` ψ. It


follows also that if ϕ1 , . . . , ϕn ` ψ and Γ ` ϕi for each i, then Γ ` ψ.

Proposition 18.20. Γ is inconsistent iff Γ ` ϕ for every ϕ.

Proof. Exercise.

Proposition 18.21 (Compactness). 1. If Γ ` ϕ then there is a finite subset


Γ0 ⊆ Γ such that Γ0 ` ϕ.

2. If every finite subset of Γ is consistent, then Γ is consistent.

Proof. 1. If Γ ` ϕ, then there is a finite sequence of formulas ϕ1 , . . . , ϕn so


that ϕ ≡ ϕn and each ϕi is either a logical axiom, an element of Γ or
follows from previous formulas by modus ponens. Take Γ0 to be those
ϕi which are in Γ. Then the derivation is likewise a derivation from Γ0 ,
and so Γ0 ` ϕ.

2. This is the contrapositive of (1) for the special case ϕ ≡ ⊥.

18.7 The Deduction Theorem


As we’ve seen, giving derivations in an axiomatic system is cumbersome, and
derivations may be hard to find. Rather than actually write out long lists of
formulas, it is generally easier to argue that such derivations exist, by making
use of a few simple results. We’ve already established three such results: ??
says we can always assert that Γ ` ϕ when we know that ϕ ∈ Γ. ?? says that
if Γ ` ϕ then also Γ ∪ {ψ} ` ϕ. And ?? implies that if Γ ` ϕ and ϕ ` ψ, then
Γ ` ψ. Here’s another simple result, a “meta”-version of modus ponens:

Proposition 18.22. If Γ ` ϕ and Γ ` ϕ → ψ, then Γ ` ψ.


Proof. We have that { ϕ, ϕ → ψ} ` ψ:


1. ϕ Hyp.
2. ϕ→ψ Hyp.
3. ψ 1, 2, MP
By ??, Γ ` ψ.

The most important result we’ll use in this context is the deduction theo-
rem:

Theorem 18.23 (Deduction Theorem). Γ ∪ { ϕ} ` ψ if and only if Γ ` ϕ → ψ.

Proof. The “if” direction is immediate. If Γ ` ϕ → ψ then also Γ ∪ { ϕ} ` ϕ → ψ


by ??. Also, Γ ∪ { ϕ} ` ϕ by ??. So, by ??, Γ ∪ { ϕ} ` ψ.
For the “only if” direction, we proceed by induction on the length of the
derivation of ψ from Γ ∪ { ϕ}.
For the induction basis, we prove the claim for every derivation of length 1.
A derivation of ψ from Γ ∪ { ϕ} of length 1 consists of ψ by itself; and if it is
correct ψ is either ∈ Γ ∪ { ϕ} or is an axiom. If ψ ∈ Γ or is an axiom, then
Γ ` ψ. We also have that Γ ` ψ → ( ϕ → ψ) by ??, and ?? gives Γ ` ϕ → ψ.
If ψ ∈ { ϕ} then Γ ` ϕ → ψ because the last sentence ϕ → ψ is the same as
ϕ → ϕ, and we have derived that in ??.
For the inductive step, suppose a derivation of ψ from Γ ∪ { ϕ} ends with
a step ψ which is justified by modus ponens. (If it is not justified by modus
ponens, ψ ∈ Γ, ψ ≡ ϕ, or ψ is an axiom, and the same reasoning as in the
induction basis applies.) Then some previous steps in the derivation are χ → ψ
and χ, for some formula χ, i.e., Γ ∪ { ϕ} ` χ → ψ and Γ ∪ { ϕ} ` χ, and the
respective derivations are shorter, so the inductive hypothesis applies to them.
We thus have both:

Γ ` ϕ → ( χ → ψ );
Γ ` ϕ → χ.

But also
Γ ` ( ϕ → (χ → ψ)) → (( ϕ → χ) → ( ϕ → ψ)),
by ??, and two applications of ?? give Γ ` ϕ → ψ, as required.

Notice how ?? and ?? were chosen precisely so that the Deduction Theorem
would hold.
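
The proof is effective: it tells us how to transform any derivation of ψ from Γ ∪ { ϕ} into a derivation of ϕ → ψ from Γ. Here is a Python sketch of that transformation, assuming each input line is tagged with its justification; the tagging scheme and the tuple representation are ours, and ax_k and ax_s build instances of the two schemas ϕ → (ψ → ϕ) and ( ϕ → (ψ → χ)) → (( ϕ → ψ) → ( ϕ → χ)) that the proof appeals to.

    def imp(a, b):
        return ('imp', a, b)

    def ax_k(a, b):
        # instance of  a -> (b -> a)
        return imp(a, imp(b, a))

    def ax_s(a, b, c):
        # instance of  (a -> (b -> c)) -> ((a -> b) -> (a -> c))
        return imp(imp(a, imp(b, c)), imp(imp(a, b), imp(a, c)))

    def self_imp(a):
        # the five-line derivation of a -> a from Example 18.10
        return [ax_k(a, imp(a, a)),
                ax_s(a, imp(a, a), a),
                imp(imp(a, imp(a, a)), imp(a, a)),   # MP 1, 2
                ax_k(a, a),
                imp(a, a)]                           # MP 3, 4

    def deduction(lines, phi):
        # lines: list of (psi, why); why is 'hyp-gamma', 'axiom',
        # 'hyp-phi' (psi is phi itself), or ('mp', j, k), where line j
        # is chi -> psi and line k is chi.  Returns a derivation of
        # phi -> psi from gamma alone, mirroring the induction.
        out = []
        for psi, why in lines:
            if why in ('hyp-gamma', 'axiom'):
                out += [psi, ax_k(psi, phi), imp(phi, psi)]   # MP
            elif why == 'hyp-phi':
                out += self_imp(phi)
            else:
                _, j, k = why
                chi = lines[k][0]
                # out already contains phi -> (chi -> psi) and phi -> chi
                out += [ax_s(phi, chi, psi),
                        imp(imp(phi, chi), imp(phi, psi)),    # MP
                        imp(phi, psi)]                        # MP
        return out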
The following are some useful facts about derivability, which we leave as
exercises.

Proposition 18.24. 1. ` ( ϕ → ψ) → ((ψ → χ) → ( ϕ → χ));

2. If Γ ∪ {¬ ϕ} ` ¬ψ then Γ ∪ {ψ} ` ϕ (Contraposition);


3. { ϕ, ¬ ϕ} ` ψ (Ex Falso Quodlibet, Explosion);


4. {¬¬ ϕ} ` ϕ (Double Negation Elimination);
5. If Γ ` ¬¬ ϕ then Γ ` ϕ.

18.8 The Deduction Theorem with Quantifiers


Theorem 18.25 (Deduction Theorem). If Γ ∪ { ϕ} ` ψ, then Γ ` ϕ → ψ.
Proof. We again proceed by induction on the length of the derivation of ψ from
Γ ∪ { ϕ }.
The proof of the induction basis is identical to that in the proof of ??.
For the inductive step, suppose again that the derivation of ψ from Γ ∪ { ϕ}
ends with a step ψ which is justified by an inference rule. If the inference rule
is modus ponens, we proceed as in the proof of ??. If the inference rule is QR,
we know that ψ ≡ χ → ∀ x θ ( x ) and a formula of the form χ → θ ( a) appears
earlier in the derivation, where a does not occur in χ, ϕ, or Γ. We thus have
that
Γ ∪ { ϕ} ` χ → θ ( a)

and the induction hypothesis applies, i.e., we have that

Γ ` ϕ → θ ( a)

By

` ( ϕ → (χ → θ ( a))) → (( ϕ ∧ χ) → θ ( a))

and modus ponens we get

Γ ` ( ϕ ∧ χ ) → θ ( a ).

Since the eigenvariable condition still applies, we can add a step to this deriva-
tion justified by QR, and get:

Γ ` ( ϕ ∧ χ) → ∀ x θ ( x )

We also have

` (( ϕ ∧ χ) → ∀ x θ ( x )) → ( ϕ → (χ → ∀ x θ ( x )))

so by modus ponens,

Γ ` ϕ → (χ → ∀ x θ ( x ))
i.e., Γ ` ϕ → ψ.
We leave the case where ψ is justified by the rule QR, but is of the form
∃ x θ ( x ) → χ, as an exercise.


18.9 Derivability and Consistency


We will now establish a number of properties of the derivability relation. They
are independently interesting, but each will play a role in the proof of the
completeness theorem.

Proposition 18.26. If Γ ` ϕ and Γ ∪ { ϕ} is inconsistent, then Γ is inconsistent.

Proof. If Γ ∪ { ϕ} is inconsistent, then Γ ∪ { ϕ} ` ⊥. By ??, Γ ` ψ for every


ψ ∈ Γ. Since also Γ ` ϕ by hypothesis, Γ ` ψ for every ψ ∈ Γ ∪ { ϕ}. By ??,
Γ ` ⊥, i.e., Γ is inconsistent.

Proposition 18.27. Γ ` ϕ iff Γ ∪ {¬ ϕ} is inconsistent.

Proof. First suppose Γ ` ϕ. Then Γ ∪ {¬ ϕ} ` ϕ by ??. Γ ∪ {¬ ϕ} ` ¬ ϕ by ??.


We also have ` ¬ ϕ → ( ϕ → ⊥) by ??. So by two applications of ??, we have
Γ ∪ {¬ ϕ} ` ⊥.
Now assume Γ ∪ {¬ ϕ} is inconsistent, i.e., Γ ∪ {¬ ϕ} ` ⊥. By the deduc-
tion theorem, Γ ` ¬ ϕ → ⊥. Γ ` (¬ ϕ → ⊥) → ¬¬ ϕ by ??, so Γ ` ¬¬ ϕ by ??.
Since Γ ` ¬¬ ϕ → ϕ (??), we have Γ ` ϕ by ?? again.

Proposition 18.28. If Γ ` ϕ and ¬ ϕ ∈ Γ, then Γ is inconsistent.

Proof. Γ ` ¬ ϕ → ( ϕ → ⊥) by ??. Γ ` ⊥ by two applications of ??.

Proposition 18.29. If Γ ∪ { ϕ} and Γ ∪ {¬ ϕ} are both inconsistent, then Γ is in-


consistent.

Proof. Exercise.

18.10 Derivability and the Propositional Connectives


Proposition 18.30. 1. Both ϕ ∧ ψ ` ϕ and ϕ ∧ ψ ` ψ

2. ϕ, ψ ` ϕ ∧ ψ.

Proof. 1. From ?? and ?? by modus ponens.

2. From ?? by two applications of modus ponens.

Proposition 18.31. 1. ϕ ∨ ψ, ¬ ϕ, ¬ψ is inconsistent.

2. Both ϕ ` ϕ ∨ ψ and ψ ` ϕ ∨ ψ.

Proof. 1. From ?? we get ` ¬ ϕ → ( ϕ → ⊥) and ` ¬ψ → (ψ → ⊥). So by


the deduction theorem, we have {¬ ϕ} ` ϕ → ⊥ and {¬ψ} ` ψ → ⊥.
From ?? we get {¬ ϕ, ¬ψ} ` ( ϕ ∨ ψ) → ⊥. By the deduction theorem,
{ ϕ ∨ ψ, ¬ ϕ, ¬ψ} ` ⊥.


2. From ?? and ?? by modus ponens.

Proposition 18.32. 1. ϕ, ϕ → ψ ` ψ.

2. Both ¬ ϕ ` ϕ → ψ and ψ ` ϕ → ψ.

Proof. 1. We can derive:

1. ϕ Hyp
2. ϕ→ψ Hyp
3. ψ 1, 2, MP

2. By ?? and ?? and the deduction theorem, respectively.

18.11 Derivability and the Quantifiers


Theorem 18.33. If c is a constant symbol not occurring in Γ or ϕ( x ) and Γ ` ϕ(c),
then Γ ` ∀ x ϕ( x ).

Proof. By the deduction theorem, Γ ` > → ϕ(c). Since c does not occur in Γ
or >, QR gives Γ ` > → ∀ x ϕ( x ). Since > is an axiom, Γ ` ∀ x ϕ( x ) by MP.

Proposition 18.34. 1. ϕ(t) ` ∃ x ϕ( x ).

2. ∀ x ϕ( x ) ` ϕ(t).

Proof. 1. By ?? and the deduction theorem.

2. By ?? and the deduction theorem.

18.12 Soundness
A derivation system, such as axiomatic deduction, is sound if it cannot de-
rive things that do not actually hold. Soundness is thus a kind of guaranteed
safety property for derivation systems. Depending on which proof theoretic
property is in question, we would like to know, for instance, that

1. every derivable ϕ is valid;

2. if ϕ is derivable from some others Γ, it is also a consequence of them;

3. if a set of formulas Γ is inconsistent, it is unsatisfiable.


These are important properties of a derivation system. If any of them do not


hold, the derivation system is deficient—it would derive too much. Conse-
quently, establishing the soundness of a derivation system is of the utmost
importance.

Proposition 18.35. If ϕ is an axiom, then M, s  ϕ for each structure M and as-


signment s.

Proof. We have to verify that all the axioms are valid. For instance, here is the
case for ??: suppose t is free for x in ϕ, and assume M, s  ∀ x ϕ. Then by
definition of satisfaction, for each s0 ∼ x s, also M, s0  ϕ, and in particular
this holds when s0 ( x ) = ValM s ( t ). By ??, M, s  ϕ [ t/x ]. This shows that
M, s  (∀ x ϕ → ϕ[t/x ]).

Theorem 18.36 (Soundness). If Γ ` ϕ then Γ  ϕ.

Proof. By induction on the length of the derivation of ϕ from Γ. If there are


no steps justified by inferences, then all formulas in the derivation are either
instances of axioms or are in Γ. By the previous proposition, all the axioms
are valid, and hence if ϕ is an axiom then Γ  ϕ. If ϕ ∈ Γ, then trivially Γ  ϕ.
If the last step of the derivation of ϕ is justified by modus ponens, then
there are formulas ψ and ψ → ϕ in the derivation, and the induction hypoth-
esis applies to the part of the derivation ending in those formulas (since they
contain at least one fewer steps justified by an inference). So, by induction
hypothesis, Γ  ψ and Γ  ψ → ϕ. Then Γ  ϕ by ??.
Now suppose the last step is justified by QR. Then that step has the form
χ → ∀ x ψ( x ) and there is a preceding step χ → ψ(c) with c not in Γ, χ, or
∀ x ψ( x ). By induction hypothesis, Γ  χ → ψ(c). By ??, Γ ∪ {χ}  ψ(c).
Consider some structure M such that M  Γ ∪ {χ}. We need to show that
M  ∀ x ψ( x ). Since ∀ x ψ( x ) is a sentence, this means we have to show that for
every variable assignment s, M, s  ψ( x ) (??). Since Γ ∪ {χ} consists entirely
of sentences, M, s  θ for all θ ∈ Γ ∪ {χ} by ??. Let M0 be like M except that
the value of c in M0 is s( x ). Since c does not occur in Γ or χ, M0  Γ ∪ {χ}
by ??. Since Γ ∪ {χ}  ψ(c), M0  ψ(c). Since ψ(c) is a sentence, M0 , s  ψ(c)
by ??. M0 , s  ψ( x ) iff M0  ψ(c) by ?? (recall that ψ(c) is just ψ( x )[c/x ]). So,
M0 , s  ψ( x ). Since c does not occur in ψ( x ), by ??, M, s  ψ( x ). But s was an
arbitrary variable assignment, so M  ∀ x ψ( x ). Thus Γ ∪ {χ}  ∀ x ψ( x ). By ??,
Γ  χ → ∀ x ψ( x ).
The case where ϕ is justified by QR but is of the form ∃ x ψ( x ) → χ is left as
an exercise.

Corollary 18.37. If ` ϕ, then ϕ is valid.

Corollary 18.38. If Γ is satisfiable, then it is consistent.


Proof. We prove the contrapositive. Suppose that Γ is not consistent. Then


Γ ` ⊥, i.e., there is a derivation of ⊥ from Γ. By ??, any structure M that
satisfies Γ must satisfy ⊥. Since M 2 ⊥ for every structure M, no M can
satisfy Γ, i.e., Γ is not satisfiable.

18.13 Derivations with Identity predicate


In order to accommodate = in derivations, we simply add new axiom schemas.
The definition of derivation and ` remains the same; we just also allow the
new axioms.

Definition 18.39 (Axioms for identity predicate).

t = t, (18.17)
t1 = t2 → (ψ(t1 ) → ψ(t2 )), (18.18)

for any ground terms t, t1 , t2 .

Proposition 18.40. The axioms ?? and ?? are valid.

Proof. Exercise.

Proposition 18.41. Γ ` t = t, for any term t and set Γ.

Proposition 18.42. If Γ ` ϕ(t1 ) and Γ ` t1 = t2 , then Γ ` ϕ(t2 ).

Proof. The formula


(t1 = t2 → ( ϕ(t1 ) → ϕ(t2 )))
is an instance of ??. The conclusion follows by two applications of MP.

Problems
Problem 18.1. Show that the following hold by exhibiting derivations from
the axioms:

1. ( ϕ ∧ ψ) → (ψ ∧ ϕ)

2. (( ϕ ∧ ψ) → χ) → ( ϕ → (ψ → χ))

3. ¬( ϕ ∨ ψ) → ¬ ϕ

Problem 18.2. Prove ??.

Problem 18.3. Prove ??

Problem 18.4. Complete the proof of ??.

Problem 18.5. Prove that Γ ` ¬ ϕ iff Γ ∪ { ϕ} is inconsistent.


Problem 18.6. Prove ??

Problem 18.7. Complete the proof of ??.

Problem 18.8. Prove ??.



Chapter 19

The Completeness Theorem

19.1 Introduction
The completeness theorem is one of the most fundamental results about logic.
It comes in two formulations, the equivalence of which we’ll prove. In its first
formulation it says something fundamental about the relationship between
semantic consequence and our proof system: if a sentence ϕ follows from
some sentences Γ, then there is also a derivation that establishes Γ ` ϕ. Thus,
the proof system is as strong as it can possibly be without proving things that
don’t actually follow.
In its second formulation, it can be stated as a model existence result: ev-
ery consistent set of sentences is satisfiable. Consistency is a proof-theoretic
notion: it says that our proof system is unable to produce certain derivations.
But who’s to say that just because there are no derivations of a certain sort
from Γ, it’s guaranteed that there is a structure M? Before the completeness
theorem was first proved—in fact before we had the proof systems we now
do—the great German mathematician David Hilbert held the view that con-
sistency of mathematical theories guarantees the existence of the objects they
are about. He put it as follows in a letter to Gottlob Frege:

If the arbitrarily given axioms do not contradict one another with


all their consequences, then they are true and the things defined by
the axioms exist. This is for me the criterion of truth and existence.

Frege vehemently disagreed. The second formulation of the completeness the-


orem shows that Hilbert was right in at least the sense that if the axioms are
consistent, then some structure exists that makes them all true.
These aren’t the only reasons the completeness theorem—or rather, its
proof—is important. It has a number of important consequences, some of
which we’ll discuss separately. For instance, since any derivation that shows
Γ ` ϕ is finite and so can only use finitely many of the sentences in Γ, it fol-
lows by the completeness theorem that if ϕ is a consequence of Γ, it is already


a consequence of a finite subset of Γ. This is called compactness. Equivalently,


if every finite subset of Γ is consistent, then Γ itself must be consistent.
Although the compactness theorem follows from the completeness theo-
rem via the detour through derivations, it is also possible to use the proof
of the completeness theorem to establish it directly. For what the proof does is
take a set of sentences with a certain property—consistency—and constructs
a structure out of this set that has certain properties (in this case, that it satisfies
the set). Almost the very same construction can be used to directly establish
compactness, by starting from “finitely satisfiable” sets of sentences instead
of consistent ones. The construction also yields other consequences, e.g., that
any satisfiable set of sentences has a finite or denumerable model. (This re-
sult is called the Löwenheim-Skolem theorem.) In general, the construction of
structures from sets of sentences is used often in logic, and sometimes even in
philosophy.

19.2 Outline of the Proof

The proof of the completeness theorem is a bit complex, and upon first reading
it, it is easy to get lost. So let us outline the proof. The first step is a shift of
perspective, that allows us to see a route to a proof. When completeness is
thought of as “whenever Γ  ϕ then Γ ` ϕ,” it may be hard to even come up
with an idea: for to show that Γ ` ϕ we have to find a derivation, and it does
not look like the hypothesis that Γ  ϕ helps us for this in any way. For some
proof systems it is possible to directly construct a derivation, but we will take
a slightly different tack. The shift in perspective required is this: completeness
can also be formulated as: “if Γ is consistent, it has a model.” Perhaps we can
use the information in Γ together with the hypothesis that it is consistent to
construct a model. After all, we know what kind of model we are looking for:
one that is as Γ describes it!
If Γ contains only atomic sentences, it is easy to construct a model for it.
Suppose the atomic sentences are all of the form P( a1 , . . . , an ) where the ai
are constant symbols. All we have to do is come up with a domain |M| and
an assignment for P so that M  P( a1 , . . . , an ). But that’s not very hard: put
|M| = N, ciM = i, and for every P( a1 , . . . , an ) ∈ Γ, put the tuple hk1 , . . . , k n i
into PM , where k i is the index of the constant symbol ai (i.e., ai ≡ cki ).
Now suppose Γ contains some formula ¬ψ, with ψ atomic. We might
worry that the construction of M interferes with the possibility of making ¬ψ
true. But here’s where the consistency of Γ comes in: if ¬ψ ∈ Γ, then ψ ∈ / Γ, or
else Γ would be inconsistent. And if ψ ∈ / Γ, then according to our construction
of M, M 2 ψ, so M  ¬ψ. So far so good.
What if Γ contains complex, non-atomic formulas? Say it contains ϕ ∧ ψ.
To make that true, we should proceed as if both ϕ and ψ were in Γ. And if


ϕ ∨ ψ ∈ Γ, then we will have to make at least one of them true, i.e., proceed
as if one of them was in Γ.
This suggests the following idea: we add additional formulas to Γ so as to
(a) keep the resulting set consistent and (b) make sure that for every possible
sentence ϕ, either ϕ is in the resulting set, or ¬ ϕ is, and (c) such that,
whenever ϕ ∧ ψ is in the set, so are both ϕ and ψ, if ϕ ∨ ψ is in the set, at least
one of ϕ or ψ is also, etc. We keep doing this (potentially forever). Call the set
of all formulas so added Γ ∗ . Then our construction above would provide us
with a structure M for which we could prove, by induction, that all sentences
in Γ ∗ are true in it, and hence also all sentence in Γ since Γ ⊆ Γ ∗ . It turns
out that guaranteeing (a) and (b) is enough. A set of sentences for which (b)
holds is called complete. So our task will be to extend the consistent set Γ to a
consistent and complete set Γ ∗ .
There is one wrinkle in this plan: if ∃ x ϕ( x ) ∈ Γ we would hope to be able
to pick some constant symbol c and add ϕ(c) in this process. But how do we
know we can always do that? Perhaps we only have a few constant symbols
in our language, and for each one of them we have ¬ ϕ(c) ∈ Γ. We can’t also
add ϕ(c), since this would make the set inconsistent, and we wouldn’t know
whether M has to make ϕ(c) or ¬ ϕ(c) true. Moreover, it might happen that Γ
contains only sentences in a language that has no constant symbols at all (e.g.,
the language of set theory).
The solution to this problem is to simply add infinitely many constants at
the beginning, plus sentences that connect them with the quantifiers in the
right way. (Of course, we have to verify that this cannot introduce an incon-
sistency.)
Our original construction works well if we only have constant symbols in
the atomic sentences. But the language might also contain function symbols.
In that case, it might be tricky to find the right functions on N to assign to
these function symbols to make everything work. So here’s another trick: in-
stead of using i to interpret ci , just take the set of constant symbols itself as
the domain. Then M can assign every constant symbol to itself: ciM = ci . But
why not go all the way: let |M| be all terms of the language! If we do this,
there is an obvious assignment of functions (that take terms as arguments and
have terms as values) to function symbols: we assign to the function sym-
bol fin the function which, given n terms t1 , . . . , tn as input, produces the term
fin (t1 , . . . , tn ) as value.
The last piece of the puzzle is what to do with =. The predicate symbol =
has a fixed interpretation: M  t = t0 iff ValM (t) = ValM (t0 ). Now if we set
things up so that the value of a term t is t itself, then this structure will make
no sentence of the form t = t0 true unless t and t0 are one and the same term.
And of course this is a problem, since basically every interesting theory in a
language with function symbols will have as theorems sentences t = t0 where
t and t0 are not the same term (e.g., in theories of arithmetic: (0 + 0) = 0). To


solve this problem, we change the domain of M: instead of using terms as the
objects in |M|, we use sets of terms, and each set is so that it contains all those
terms which the sentences in Γ require to be equal. So, e.g., if Γ is a theory of
arithmetic, one of these sets will contain: 0, (0 + 0), (0 × 0), etc. This will be
the set we assign to 0, and it will turn out that this set is also the value of all
the terms in it, e.g., also of (0 + 0). Therefore, the sentence (0 + 0) = 0 will be
true in this revised structure.
So here’s what we’ll do. First we investigate the properties of complete
consistent sets, in particular we prove that a complete consistent set contains
ϕ ∧ ψ iff it contains both ϕ and ψ, ϕ ∨ ψ iff it contains at least one of them, etc.
(??). Then we define and investigate “saturated” sets of sentences. A saturated
set is one which contains conditionals that link each quantified sentence to
instances of it (??). We show that any consistent set Γ can always be extended
to a saturated set Γ 0 (??). If a set is consistent, saturated, and complete it also
has the property that it contains ∃ x ϕ( x ) iff it contains ϕ(t) for some closed
term t and ∀ x ϕ( x ) iff it contains ϕ(t) for all closed terms t (??). We’ll then take
the saturated consistent set Γ 0 and show that it can be extended to a saturated,
consistent, and complete set Γ ∗ (??). This set Γ ∗ is what we’ll use to define
our term model M( Γ ∗ ). The term model has the set of closed terms as its
domain, and the interpretation of its predicate symbols is given by the atomic
sentences in Γ ∗ (??). We’ll use the properties of saturated, complete consistent
sets to show that indeed M( Γ ∗ )  ϕ iff ϕ ∈ Γ ∗ (??), and thus in particular,
M( Γ ∗ )  Γ. Finally, we’ll consider how to define a term model if Γ contains =
as well (??) and show that it satisfies Γ ∗ (??).

19.3 Complete Consistent Sets of Sentences


Definition 19.1 (Complete set). A set Γ of sentences is complete iff for any
sentence ϕ, either ϕ ∈ Γ or ¬ ϕ ∈ Γ.

Complete sets of sentences leave no questions unanswered. For any sen-


tence ϕ, Γ “says” whether ϕ is true or false. The importance of complete sets extends
beyond the proof of the completeness theorem. A theory which is complete
and axiomatizable, for instance, is always decidable.
Complete consistent sets are important in the completeness proof since we
can guarantee that every consistent set of sentences Γ is contained in a com-
plete consistent set Γ ∗ . A complete consistent set contains, for each sentence ϕ,
either ϕ or its negation ¬ ϕ, but not both. This is true in particular for atomic
sentences, so from a complete consistent set in a language suitably expanded
by constant symbols, we can construct a structure where the interpretation of
predicate symbols is defined according to which atomic sentences are in Γ ∗ .
This structure can then be shown to make all sentences in Γ ∗ (and hence also
all those in Γ) true. The proof of this latter fact requires that ¬ ϕ ∈ Γ ∗ iff
ϕ∈ / Γ ∗ , ( ϕ ∨ ψ) ∈ Γ ∗ iff ϕ ∈ Γ ∗ or ψ ∈ Γ ∗ , etc.


In what follows, we will often tacitly use the properties of reflexivity, mono-
tonicity, and transitivity of ` (see ??????????????).

Proposition 19.2. Suppose Γ is complete and consistent. Then:

1. If Γ ` ϕ, then ϕ ∈ Γ.

2. ϕ ∧ ψ ∈ Γ iff both ϕ ∈ Γ and ψ ∈ Γ.

3. ϕ ∨ ψ ∈ Γ iff either ϕ ∈ Γ or ψ ∈ Γ.

4. ϕ → ψ ∈ Γ iff either ϕ ∈ / Γ or ψ ∈ Γ.

Proof. Let us suppose for all of the following that Γ is complete and consistent.

1. If Γ ` ϕ, then ϕ ∈ Γ.
Suppose that Γ ` ϕ. Suppose to the contrary that ϕ ∈ / Γ. Since Γ is
complete, ¬ ϕ ∈ Γ. By ??????????????, Γ is inconsistent. This contradicts
the assumption that Γ is consistent. Hence, it cannot be the case that
ϕ∈/ Γ, so ϕ ∈ Γ.

2. Exercise.

3. First we show that if ϕ ∨ ψ ∈ Γ, then either ϕ ∈ Γ or ψ ∈ Γ. Suppose


ϕ ∨ ψ ∈ Γ but ϕ ∈ / Γ and ψ ∈ / Γ. Since Γ is complete, ¬ ϕ ∈ Γ and
¬ψ ∈ Γ. By ??????????????, item (1), Γ is inconsistent, a contradiction.
Hence, either ϕ ∈ Γ or ψ ∈ Γ.
For the reverse direction, suppose that ϕ ∈ Γ or ψ ∈ Γ. By ??????????????,
item (2), Γ ` ϕ ∨ ψ. By ??, ϕ ∨ ψ ∈ Γ, as required.

4. Exercise.

19.4 Henkin Expansion


Part of the challenge in proving the completeness theorem is that the model
we construct from a complete consistent set Γ must make all the quantified
formulas in Γ true. In order to guarantee this, we use a trick due to Leon
Henkin. In essence, the trick consists in expanding the language by infinitely
many constant symbols and adding, for each formula with one free variable
ϕ( x ) a formula of the form ∃ x ϕ( x ) → ϕ(c), where c is one of the new constant
symbols. When we construct the structure satisfying Γ, this will guarantee
that each true existential sentence has a witness among the new constants.

Proposition 19.3. If Γ is consistent in L and L0 is obtained from L by adding


a denumerable set of new constant symbols d0 , d1 , . . . , then Γ is consistent in L0 .


Definition 19.4 (Saturated set). A set Γ of formulas of a language L is saturated


iff for each formula ϕ( x ) ∈ Frm(L) with one free variable x there is a constant
symbol c ∈ L such that ∃ x ϕ( x ) → ϕ(c) ∈ Γ.

The following definition will be used in the proof of the next theorem.

Definition 19.5. Let L0 be as in ??. Fix an enumeration ϕ0 ( x0 ), ϕ1 ( x1 ), . . . of


all formulas ϕi ( xi ) of L0 in which one variable (xi ) occurs free. We define the
sentences θn by induction on n.
Let c0 be the first constant symbol among the di we added to L which does
not occur in ϕ0 ( x0 ). Assuming that θ0 , . . . , θn−1 have already been defined,
let cn be the first among the new constant symbols di that occurs neither in θ0 ,
. . . , θn−1 nor in ϕn ( xn ).
Now let θn be the formula ∃ xn ϕn ( xn ) → ϕn (cn ).
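
The construction of the θn is entirely mechanical. A Python sketch follows, with formulas as nested tuples, new_constants an enumeration of the di , and the substitution and occurrence helpers our own additions; it is an illustration of the definition, not part of the text.

    def occurs_in(c, x):
        # Does the constant c occur anywhere in x (a formula, term,
        # or tuple of these)?
        if x == c:
            return True
        return isinstance(x, tuple) and any(occurs_in(c, y) for y in x)

    def subst(phi, x, c):
        # Replace the variable x by the constant c; capture is not an
        # issue since c is fresh, and we assume x is not re-bound in phi.
        if phi == x:
            return c
        if isinstance(phi, tuple):
            return tuple(subst(p, x, c) for p in phi)
        return phi

    def henkin_sentences(formulas, new_constants):
        # Yield theta_n = (exists x_n) phi_n(x_n) -> phi_n(c_n), where
        # c_n is the first new constant occurring neither in theta_0,
        # ..., theta_{n-1} nor in phi_n(x_n).
        thetas = []
        for phi, x in formulas:          # phi has the free variable x
            c = next(d for d in new_constants
                     if not occurs_in(d, (phi,) + tuple(thetas)))
            theta = ('imp', ('exists', x, phi), subst(phi, x, c))
            thetas.append(theta)
            yield theta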

Lemma 19.6. Every consistent set Γ can be extended to a saturated consistent set Γ 0 .

Proof. Given a consistent set of sentences Γ in a language L, expand the lan-


guage by adding a denumerable set of new constant symbols to form L0 . By
??, Γ is still consistent in the richer language. Further, let θi be as in ??. Let

Γ0 = Γ
Γn+1 = Γn ∪ {θn }

i.e., Γn+1 = Γ ∪ {θ0 , . . . , θn }, and let Γ 0 = ⋃n Γn . Γ 0 is clearly saturated.

If Γ 0 were inconsistent, then for some n, Γn would be inconsistent (Exercise:


explain why). So to show that Γ 0 is consistent it suffices to show, by induction
on n, that each set Γn is consistent.
The induction basis is simply the claim that Γ0 = Γ is consistent, which
is the hypothesis of the theorem. For the induction step, suppose that Γn is
consistent but Γn+1 = Γn ∪ {θn } is inconsistent. Recall that θn is ∃ xn ϕn ( xn ) →
ϕn (cn ), where ϕn ( xn ) is a formula of L0 with only the variable xn free. By the
way we’ve chosen the cn (see ??), cn does not occur in ϕn ( xn ) nor in Γn .
If Γn ∪ {θn } is inconsistent, then Γn ` ¬θn , and hence both of the following
hold:
Γn ` ∃ xn ϕn ( xn ) Γn ` ¬ ϕn (cn )
Since cn does not occur in Γn or in ϕn ( xn ), ?????????????? applies. From Γn `
¬ ϕn (cn ), we obtain Γn ` ∀ xn ¬ ϕn ( xn ). Thus we have that both Γn ` ∃ xn ϕn ( xn )
and Γn ` ∀ xn ¬ ϕn ( xn ), so Γn itself is inconsistent. (Note that ∀ xn ¬ ϕn ( xn ) `
¬∃ xn ϕn ( xn ).) Contradiction: Γn was supposed to be consistent. Hence Γn ∪
{θn } is consistent.

We’ll now show that complete, consistent sets which are saturated have the
property that it contains a universally quantified sentence iff it contains all its
instances and it contains an existentially quantified sentence iff it contains at


least one instance. We’ll use this to show that the structure we’ll generate from
a complete, consistent, saturated set makes all its quantified sentences true.

Proposition 19.7. Suppose Γ is complete, consistent, and saturated.

1. ∃ x ϕ( x ) ∈ Γ iff ϕ(t) ∈ Γ for at least one closed term t.

2. ∀ x ϕ( x ) ∈ Γ iff ϕ(t) ∈ Γ for all closed terms t.

Proof. 1. First suppose that ∃ x ϕ( x ) ∈ Γ. Because Γ is saturated, (∃ x ϕ( x ) →


ϕ(c)) ∈ Γ for some constant symbol c. By ??????????????, item (1), and
????, ϕ(c) ∈ Γ.
For the other direction, saturation is not necessary: Suppose ϕ(t) ∈ Γ.
Then Γ ` ∃ x ϕ( x ) by ??????????????, item (1). By ????, ∃ x ϕ( x ) ∈ Γ.

2. Exercise.

19.5 Lindenbaum’s Lemma


We now prove a lemma that shows that any consistent set of sentences is con-
tained in some set of sentences which is not just consistent, but also complete.
The proof works by adding one sentence at a time, guaranteeing at each step
that the set remains consistent. We do this so that for every ϕ, either ϕ or ¬ ϕ
gets added at some stage. The union of all stages in that construction then
contains either ϕ or its negation ¬ ϕ and is thus complete. It is also consistent,
since we made sure at each stage not to introduce an inconsistency.

Lemma 19.8 (Lindenbaum’s Lemma). Every consistent set Γ in a language L can


be extended to a complete and consistent set Γ ∗ .

Proof. Let Γ be consistent. Let ϕ0 , ϕ1 , . . . be an enumeration of all the sen-


tences of L. Define Γ0 = Γ, and
Γn+1 = Γn ∪ { ϕn } if Γn ∪ { ϕn } is consistent;
Γn+1 = Γn ∪ {¬ ϕn } otherwise.

Let Γ ∗ = ⋃n≥0 Γn .

Each Γn is consistent: Γ0 is consistent by definition. If Γn+1 = Γn ∪ { ϕn },


this is because the latter is consistent. If it isn’t, Γn+1 = Γn ∪ {¬ ϕn }. We have
to verify that Γn ∪ {¬ ϕn } is consistent. Suppose it’s not. Then both Γn ∪ { ϕn }
and Γn ∪ {¬ ϕn } are inconsistent. This means that Γn would be inconsistent by
??????????????, contrary to the induction hypothesis.
For every n and every i < n, Γi ⊆ Γn . This follows by a simple induction
on n. For n = 0, there are no i < 0, so the claim holds automatically. For
the inductive step, suppose it is true for n. We have Γn+1 = Γn ∪ { ϕn } or


= Γn ∪ {¬ ϕn } by construction. So Γn ⊆ Γn+1 . If i < n, then Γi ⊆ Γn by


inductive hypothesis, and so Γi ⊆ Γn+1 by transitivity of ⊆.
From this it follows that every finite subset of Γ ∗ is a subset of Γn for
some n, since each ψ ∈ Γ ∗ not already in Γ0 is added at some stage i. If n
is the last one of these, then all ψ in the finite subset are in Γn . So, every finite
subset of Γ ∗ is consistent. By ??????????????, Γ ∗ is consistent.
Every sentence of Frm(L) appears on the list used to define Γ ∗ . If ϕn ∈ / Γ∗ ,
then that is because Γn ∪ { ϕn } was inconsistent. But then ¬ ϕn ∈ Γ ∗ , so Γ ∗ is

complete.
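
The stage-by-stage construction can actually be run whenever consistency of a finite extension can be decided, as it can for propositional logic by truth tables (for full first-order logic no such decision procedure exists, so the following Python sketch is an idealization in which the test is passed in as a parameter, and only finitely many stages are computed):

    def lindenbaum(gamma, sentences, consistent, stages):
        # Extend gamma along the enumeration `sentences`: add phi_n if
        # that keeps the set consistent, and its negation otherwise.
        g = set(gamma)
        for phi, _ in zip(sentences, range(stages)):
            if consistent(g | {phi}):
                g.add(phi)
            else:
                g.add(('not', phi))
        return g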

19.6 Construction of a Model


Right now we are not concerned about =, i.e., we only want to show that a
consistent set Γ of sentences not containing = is satisfiable. We first extend Γ
to a consistent, complete, and saturated set Γ ∗ . In this case, the definition of a
model M( Γ ∗ ) is simple: We take the set of closed terms of L0 as the domain.
We assign every constant symbol to itself, and make sure that more generally,

for every closed term t, ValM( Γ ∗ ) (t) = t. The predicate symbols are assigned
extensions in such a way that an atomic sentence is true in M( Γ ∗ ) iff it is
in Γ ∗ . This will obviously make all the atomic sentences in Γ ∗ true in M( Γ ∗ ).
The rest are true provided the Γ ∗ we start with is consistent, complete, and
saturated.

Definition 19.9 (Term model). Let Γ ∗ be a complete and consistent, saturated


set of sentences in a language L. The term model M( Γ ∗ ) of Γ ∗ is the structure
defined as follows:

1. The domain |M( Γ ∗ )| is the set of all closed terms of L.


2. The interpretation of a constant symbol c is c itself: cM( Γ ∗ ) = c.

3. The function symbol f is assigned the function which, given as argu-


ments the closed terms t1 , . . . , tn , has as value the closed term f (t1 , . . . , tn ):

f M( Γ ∗ ) (t1 , . . . , tn ) = f (t1 , . . . , tn )

4. If R is an n-place predicate symbol, then



ht1 , . . . , tn i ∈ RM( Γ ∗ ) iff R(t1 , . . . , tn ) ∈ Γ ∗ .
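
A Python sketch of the term model, assuming closed terms are constant strings or nested tuples ('f', t1, ..., tn) and Γ ∗ is given by (at least) its set of atomic members; the representation is ours, not the text's.

    class TermModel:
        # The domain is the set of closed terms; we never need to list
        # it explicitly, since every term denotes itself.
        def __init__(self, gamma_star):
            self.gamma_star = gamma_star   # a set of tuple-formulas

        def value(self, t):
            # Clause: the value of every closed term t is t itself.
            return t

        def atom_true(self, pred, *terms):
            # Clause: R(t1, ..., tn) is true iff it is in Gamma*.
            return (pred, *terms) in self.gamma_star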

A structure M may make an existentially quantified sentence ∃ x ϕ( x ) true


without there being an instance ϕ(t) that it makes true. A structure M may
make all instances ϕ(t) of a universally quantified sentence ∀ x ϕ( x ) true, with-
out making ∀ x ϕ( x ) true. This is because in general not every element of |M|


is the value of a closed term (M may not be covered). This is the reason the sat-
isfaction relation is defined via variable assignments. However, for our term
model M( Γ ∗ ) this wouldn’t be necessary—because it is covered. This is the
content of the next result.
Proposition 19.10. Let M( Γ ∗ ) be the term model of Definition 19.9.

1. M( Γ ∗ ) ⊨ ∃ x ϕ( x ) iff M( Γ ∗ ) ⊨ ϕ(t) for at least one term t.

2. M( Γ ∗ ) ⊨ ∀ x ϕ( x ) iff M( Γ ∗ ) ⊨ ϕ(t) for all terms t.

Proof. 1. By the definition of satisfaction, M( Γ ∗ ) ⊨ ∃ x ϕ( x ) iff for at least one variable assignment s, M( Γ ∗ ), s ⊨ ϕ( x ). As |M( Γ ∗ )| consists of the closed terms of L, this is the case iff there is at least one closed term t such that s( x ) = t and M( Γ ∗ ), s ⊨ ϕ( x ). Since in the term model the value of every closed term is that term itself, M( Γ ∗ ), s ⊨ ϕ( x ) iff M( Γ ∗ ), s ⊨ ϕ(t), where s( x ) = t. And since ϕ(t) is a sentence, its satisfaction does not depend on the variable assignment, so M( Γ ∗ ), s ⊨ ϕ(t) iff M( Γ ∗ ) ⊨ ϕ(t).
2. Exercise.

Lemma 19.11 (Truth Lemma). Suppose ϕ does not contain =. Then M( Γ ∗ ) ⊨ ϕ iff ϕ ∈ Γ ∗ .
Proof. We prove both directions simultaneously, and by induction on ϕ.

1. ϕ ≡ ⊥: M( Γ ∗ ) ⊭ ⊥ by definition of satisfaction. On the other hand, ⊥ ∉ Γ ∗ since Γ ∗ is consistent.

2. ϕ ≡ R(t1 , . . . , tn ): M( Γ ∗ ) ⊨ R(t1 , . . . , tn ) iff ⟨t1 , . . . , tn ⟩ ∈ RM( Γ ∗ ) (by the definition of satisfaction) iff R(t1 , . . . , tn ) ∈ Γ ∗ (by the construction of M( Γ ∗ )).

3. ϕ ≡ ¬ψ: M( Γ ∗ ) ⊨ ϕ iff M( Γ ∗ ) ⊭ ψ (by definition of satisfaction). By induction hypothesis, M( Γ ∗ ) ⊭ ψ iff ψ ∉ Γ ∗ . Since Γ ∗ is consistent and complete, ψ ∉ Γ ∗ iff ¬ψ ∈ Γ ∗ .

4. ϕ ≡ ψ ∧ χ: exercise.

5. ϕ ≡ ψ ∨ χ: M( Γ ∗ ) ⊨ ϕ iff M( Γ ∗ ) ⊨ ψ or M( Γ ∗ ) ⊨ χ (by definition of satisfaction) iff ψ ∈ Γ ∗ or χ ∈ Γ ∗ (by induction hypothesis). This is the case iff (ψ ∨ χ) ∈ Γ ∗ , since Γ ∗ is complete and consistent.

6. ϕ ≡ ψ → χ: exercise.

7. ϕ ≡ ∀ x ψ( x ): exercise.

8. ϕ ≡ ∃ x ψ( x ): M( Γ ∗ ) ⊨ ϕ iff M( Γ ∗ ) ⊨ ψ(t) for at least one term t (Proposition 19.10). By induction hypothesis, this is the case iff ψ(t) ∈ Γ ∗ for at least one term t. Since Γ ∗ is saturated, complete, and consistent, this in turn is the case iff ∃ x ψ( x ) ∈ Γ ∗ .


19.7 Identity
The construction of the term model given in the preceding section is enough
to establish completeness for first-order logic for sets Γ that do not contain =.
The term model satisfies every ϕ ∈ Γ ∗ which does not contain = (and hence
all ϕ ∈ Γ). It does not work, however, if = is present. The reason is that Γ ∗
then may contain a sentence t = t′ , but in the term model the value of any term is that term itself. Hence, if t and t′ are different terms, their values in the term model—i.e., t and t′ , respectively—are different, and so t = t′ is false.
We can fix this, however, using a construction known as “factoring.”

Definition 19.12. Let Γ ∗ be a consistent and complete set of sentences in L.


We define the relation ≈ on the set of closed terms of L by

t ≈ t′ iff t = t′ ∈ Γ ∗

Proposition 19.13. The relation ≈ has the following properties:

1. ≈ is reflexive.

2. ≈ is symmetric.

3. ≈ is transitive.

4. If t ≈ t′ , f is a function symbol, and t1 , . . . , ti−1 , ti+1 , . . . , tn are terms, then

f (t1 , . . . , ti−1 , t, ti+1 , . . . , tn ) ≈ f (t1 , . . . , ti−1 , t′ , ti+1 , . . . , tn ).

5. If t ≈ t′ , R is a predicate symbol, and t1 , . . . , ti−1 , ti+1 , . . . , tn are terms, then

R(t1 , . . . , ti−1 , t, ti+1 , . . . , tn ) ∈ Γ ∗ iff R(t1 , . . . , ti−1 , t′ , ti+1 , . . . , tn ) ∈ Γ ∗ .

Proof. Since Γ ∗ is consistent and complete, t = t′ ∈ Γ ∗ iff Γ ∗ ⊢ t = t′ . Thus it is enough to show the following:

1. Γ ∗ ⊢ t = t for all terms t.

2. If Γ ∗ ⊢ t = t′ then Γ ∗ ⊢ t′ = t.

3. If Γ ∗ ⊢ t = t′ and Γ ∗ ⊢ t′ = t″ , then Γ ∗ ⊢ t = t″ .

4. If Γ ∗ ⊢ t = t′ , then

Γ ∗ ⊢ f (t1 , . . . , ti−1 , t, ti+1 , . . . , tn ) = f (t1 , . . . , ti−1 , t′ , ti+1 , . . . , tn )

for every n-place function symbol f and terms t1 , . . . , ti−1 , ti+1 , . . . , tn .


5. If Γ ∗ ⊢ t = t′ and Γ ∗ ⊢ R(t1 , . . . , ti−1 , t, ti+1 , . . . , tn ), then Γ ∗ ⊢ R(t1 , . . . , ti−1 , t′ , ti+1 , . . . , tn ) for every n-place predicate symbol R and terms t1 , . . . , ti−1 , ti+1 , . . . , tn .

All of these follow using the rules for the identity predicate in the derivation system.

Definition 19.14. Suppose Γ ∗ is a consistent and complete set in a language L,


t is a term, and ≈ as in the previous definition. Then:

[t]≈ = {t′ : t′ ∈ Trm(L), t ≈ t′ }

and Trm(L)/≈ = {[t]≈ : t ∈ Trm(L)}.
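For a finite batch of identity sentences, the classes [t]≈ can be computed with a standard union-find routine, as in the hypothetical Python sketch below; union-find supplies reflexivity, symmetry, and transitivity, while closure under function application is exactly what Proposition 19.13 guarantees for the full Γ ∗ . All names and the term encoding are ours.

    # Terms: constants are strings, f(t) is the tuple ("f", t).
    parent = {}

    def find(t):
        parent.setdefault(t, t)
        while parent[t] != t:
            parent[t] = parent[parent[t]]   # path compression
            t = parent[t]
        return t

    def union(t, s):
        parent[find(t)] = find(s)

    identities = [("c", "d"), ("d", ("f", "c"))]   # c = d and d = f(c)
    for t, s in identities:
        union(t, s)

    terms = ["c", "d", ("f", "c"), ("f", "d")]
    classes = {}
    for t in terms:
        classes.setdefault(find(t), []).append(t)
    print(list(classes.values()))
    # [['c', 'd', ('f', 'c')], [('f', 'd')]] -- merging f(d) with f(c)
    # needs congruence closure; the complete set Gamma* would contain
    # the identity f(c) = f(d) explicitly.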

Definition 19.15. Let M = M( Γ ∗ ) be the term model for Γ ∗ . Then M/≈ is the
following structure:

1. |M/≈ | = Trm(L)/≈ .

2. cM/≈ = [c]≈

3. f M/≈ ([t1 ]≈ , . . . , [tn ]≈ ) = [ f (t1 , . . . , tn )]≈

4. ⟨[t1 ]≈ , . . . , [tn ]≈ ⟩ ∈ RM/≈ iff M ⊨ R(t1 , . . . , tn ).

Note that we have defined f M/≈ and RM/≈ for elements of Trm(L)/≈ by referring to them as [t]≈ , i.e., via representatives t ∈ [t]≈ . We have to make sure that these definitions do not depend on the choice of these representatives, i.e., that for some other choices t′ which determine the same equivalence classes ([t]≈ = [t′ ]≈ ), the definitions yield the same result. For instance, if R is a one-place predicate symbol, the last clause of the definition says that [t]≈ ∈ RM/≈ iff M ⊨ R(t). If for some other term t′ with t ≈ t′ , M ⊭ R(t′ ), then the definition would require [t′ ]≈ ∉ RM/≈ . If t ≈ t′ , then [t]≈ = [t′ ]≈ , but we can't have both [t]≈ ∈ RM/≈ and [t]≈ ∉ RM/≈ . However, Proposition 19.13 guarantees that this cannot happen.

Proposition 19.16. M/≈ is well defined, i.e., if t1 , . . . , tn , t′1 , . . . , t′n are terms, and ti ≈ t′i then

1. [ f (t1 , . . . , tn )]≈ = [ f (t′1 , . . . , t′n )]≈ , i.e.,

f (t1 , . . . , tn ) ≈ f (t′1 , . . . , t′n )

and

2. M ⊨ R(t1 , . . . , tn ) iff M ⊨ R(t′1 , . . . , t′n ), i.e.,

R(t1 , . . . , tn ) ∈ Γ ∗ iff R(t′1 , . . . , t′n ) ∈ Γ ∗ .

Proof. Follows from Proposition 19.13 by induction on n.


Lemma 19.17. M/≈ ⊨ ϕ iff ϕ ∈ Γ ∗ for all sentences ϕ.

Proof. By induction on ϕ, just as in the proof of Lemma 19.11. The only case that needs additional attention is when ϕ ≡ t = t′ .

M/≈ ⊨ t = t′ iff [t]≈ = [t′ ]≈ (by definition of M/≈ )
iff t ≈ t′ (by definition of [t]≈ )
iff t = t′ ∈ Γ ∗ (by definition of ≈).

Note that while M( Γ ∗ ) is always enumerable and infinite, M/≈ may be


finite, since it may turn out that there are only finitely many classes [t]≈ . This
is to be expected, since Γ may contain sentences which require any structure
in which they are true to be finite. For instance, ∀ x ∀y x = y is a consistent
sentence, but is satisfied only in structures with a domain that contains exactly
one element.

19.8 The Completeness Theorem


Let’s combine our results: we arrive at the completeness theorem.

Theorem 19.18 (Completeness Theorem). Let Γ be a set of sentences. If Γ is


consistent, it is satisfiable.

Proof. Suppose Γ is consistent. By the lemma on saturated extensions earlier in this chapter, there is a saturated consistent set Γ′ ⊇ Γ. By Lindenbaum's Lemma (Lemma 19.8), there is a Γ ∗ ⊇ Γ′ which is consistent and complete. Since Γ′ ⊆ Γ ∗ , for each sentence of the form ∃ x ϕ( x ), Γ ∗ contains a sentence ∃ x ϕ( x ) → ϕ(c), and so Γ ∗ is saturated. If Γ does not contain =, then by the Truth Lemma (Lemma 19.11), M( Γ ∗ ) ⊨ ϕ iff ϕ ∈ Γ ∗ . From this it follows in particular that for all ϕ ∈ Γ, M( Γ ∗ ) ⊨ ϕ, so Γ is satisfiable. If Γ does contain =, then by Lemma 19.17, M/≈ ⊨ ϕ iff ϕ ∈ Γ ∗ for all sentences ϕ. In particular, M/≈ ⊨ ϕ for all ϕ ∈ Γ, so Γ is satisfiable.

Corollary 19.19 (Completeness Theorem, Second Version). For all sets of sentences Γ and sentences ϕ: if Γ ⊨ ϕ then Γ ⊢ ϕ.

Proof. Note that the Γ's in Theorem 19.18 and Corollary 19.19 are universally quantified. To make sure we do not confuse ourselves, let us restate Theorem 19.18 using a different variable: for any set of sentences ∆, if ∆ is consistent, it is satisfiable. By contraposition, if ∆ is not satisfiable, then ∆ is inconsistent. We will use this to prove the corollary.

Suppose that Γ ⊨ ϕ. Then Γ ∪ {¬ ϕ} is unsatisfiable: any structure satisfying Γ ∪ {¬ ϕ} would satisfy Γ but not ϕ. Taking Γ ∪ {¬ ϕ} as our ∆, the restated Theorem 19.18 gives us that Γ ∪ {¬ ϕ} is inconsistent. Since Γ ⊢ ϕ if and only if Γ ∪ {¬ ϕ} is inconsistent (a property of the derivation system established earlier), Γ ⊢ ϕ.


19.9 The Compactness Theorem


One important consequence of the completeness theorem is the compactness
theorem. The compactness theorem states that if each finite subset of a set
of sentences is satisfiable, the entire set is satisfiable—even if the set itself is
infinite. This is far from obvious. There is nothing that seems to rule out,
at first glance at least, the possibility of there being infinite sets of sentences
which are contradictory, but the contradiction only arises, so to speak, from
the infinite number. The compactness theorem says that such a scenario can
be ruled out: there are no unsatisfiable infinite sets of sentences each finite
subset of which is satisfiable. Like the completeness theorem, it has a version
related to entailment: if an infinite set of sentences entails something, then some finite subset of it already does.

Definition 19.20. A set Γ of formulas is finitely satisfiable if and only if every


finite Γ0 ⊆ Γ is satisfiable.

Theorem 19.21 (Compactness Theorem). The following hold for any set of sentences Γ and any sentence ϕ:

1. Γ ⊨ ϕ iff there is a finite Γ0 ⊆ Γ such that Γ0 ⊨ ϕ.

2. Γ is satisfiable if and only if it is finitely satisfiable.

Proof. We prove (2). If Γ is satisfiable, then there is a structure M such that M ⊨ ϕ for all ϕ ∈ Γ. Of course, this M also satisfies every finite subset of Γ, so Γ is finitely satisfiable.

Now suppose that Γ is finitely satisfiable. Then every finite subset Γ0 ⊆ Γ is satisfiable. By soundness, every finite subset is consistent. Then Γ itself must be consistent: any derivation of a contradiction from Γ would use only finitely many sentences of Γ, and so would already show some finite subset of Γ to be inconsistent. By completeness (Theorem 19.18), since Γ is consistent, it is satisfiable.

Example 19.22. In every model M of a theory Γ, each term t of course picks


out an element of |M|. Can we guarantee that it is also true that every element
of |M| is picked out by some term or other? In other words, are there theo-
ries Γ all models of which are covered? The compactness theorem shows that
this is not the case if Γ has infinite models. Here’s how to see this: Let M be
an infinite model of Γ, and let c be a constant symbol not in the language of Γ.
Let ∆ be the set of all sentences c ≠ t for t a term in the language L of Γ, i.e.,

∆ = {c ≠ t : t ∈ Trm(L)}.

A finite subset of Γ ∪ ∆ can be written as Γ0 ∪ ∆0 , with Γ0 ⊆ Γ and ∆0 ⊆ ∆. Since ∆0 is finite, it can contain only finitely many terms. Let a ∈ |M| be an element of |M| not picked out by any of them, and let M′ be the structure that is just like M, but also cM′ = a. Since a ≠ ValM (t) for all t occurring in ∆0 , M′ ⊨ ∆0 .


Since M ⊨ Γ, Γ0 ⊆ Γ, and c does not occur in Γ, also M′ ⊨ Γ0 . Together, M′ ⊨ Γ0 ∪ ∆0 for every finite subset Γ0 ∪ ∆0 of Γ ∪ ∆. So every finite subset of Γ ∪ ∆ is satisfiable. By compactness, Γ ∪ ∆ itself is satisfiable. So there are models M ⊨ Γ ∪ ∆. Every such M is a model of Γ, but is not covered, since ValM (c) ≠ ValM (t) for all terms t of L.

Example 19.23. Consider a language L containing the predicate symbol <,


constant symbols , , and function symbols +, ×, −, ÷. Let Γ be the set
of all sentences in this language true in Q with domain Q and the obvious
interpretations. Γ is the set of all sentences of L true about the rational num-
bers. Of course, in Q (and even in R), there are no numbers which are greater
than 0 but less than 1/k for all k ∈ Z+ . Such a number, if it existed, would
be an infinitesimal: non-zero, but infinitely small. The compactness theorem
shows that there are models of Γ in which infinitesimals exist: Let ∆ be {0 < c} ∪ {c < (1 ÷ k̄) : k ∈ Z+ } (where k̄ = (1 + (1 + · · · + (1 + 1) . . . )) with k 1's). For any finite subset ∆0 of ∆ there is a K such that all the sentences c < (1 ÷ k̄) in ∆0 have k < K. If we expand Q to Q′ with cQ′ = 1/K we have that Q′ ⊨ Γ ∪ ∆0 , and so Γ ∪ ∆ is finitely satisfiable (Exercise: prove this in detail). By compactness, Γ ∪ ∆ is satisfiable. Any model S of Γ ∪ ∆ contains an infinitesimal, namely cS .

Example 19.24. We know that first-order logic with the identity predicate can express that the domain must have some minimal size: the sentence ϕ≥n (which says "there are at least n distinct objects") is true only in
structures where |M| has at least n objects. So if we take

∆ = { ϕ≥n : n ≥ 1}

then any model of ∆ must be infinite. Thus, we can guarantee that a theory
only has infinite models by adding ∆ to it: the models of Γ ∪ ∆ are all and only
the infinite models of Γ.
So first-order logic can express infinitude. The compactness theorem shows
that it cannot express finitude, however. For suppose some set of sentences Λ
were satisfied in all and only finite structures. Then ∆ ∪ Λ is finitely satisfiable.
Why? Suppose ∆0 ∪ Λ0 ⊆ ∆ ∪ Λ is finite with ∆0 ⊆ ∆ and Λ0 ⊆ Λ. Let n be the
largest number such that ϕ≥n ∈ ∆0 . Λ, being satisfied in all finite structures,
has a model M with finitely many but ≥ n elements. But then M ⊨ ∆0 ∪ Λ0 . By
compactness, ∆ ∪ Λ has an infinite model, contradicting the assumption that
Λ is satisfied only in finite structures.
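For concreteness, the sentences ϕ≥n can be generated mechanically. The following throwaway Python sketch uses an ad hoc ASCII syntax of our own; it is only meant to show the shape of the sentences.

    def at_least(n):
        # "There are at least n distinct objects":
        # exists x1 ... exists xn, the conjunction of xi != xj for i < j.
        xs = [f"x{i}" for i in range(1, n + 1)]
        diffs = [f"{a} != {b}" for i, a in enumerate(xs) for b in xs[i+1:]]
        body = " & ".join(diffs) if diffs else "T"   # degenerate case n = 1
        return "".join(f"exists {x} " for x in xs) + "(" + body + ")"

    print(at_least(3))
    # exists x1 exists x2 exists x3 (x1 != x2 & x1 != x3 & x2 != x3)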

19.10 A Direct Proof of the Compactness Theorem


We can prove the Compactness Theorem directly, without appealing to the
Completeness Theorem, using the same ideas as in the proof of the complete-
ness theorem. In the proof of the Completeness Theorem we started with a


consistent set Γ of sentences, expanded it to a consistent, saturated, and com-


plete set Γ ∗ of sentences, and then showed that in the term model M( Γ ∗ )
constructed from Γ ∗ , all sentences of Γ are true, so Γ is satisfiable.
We can use the same method to show that a finitely satisfiable set of sen-
tences is satisfiable. We just have to prove the corresponding versions of
the results leading to the truth lemma where we replace “consistent” with
“finitely satisfiable.”

Proposition 19.25. Suppose Γ is complete and finitely satisfiable. Then:

1. ( ϕ ∧ ψ) ∈ Γ iff both ϕ ∈ Γ and ψ ∈ Γ.

2. ( ϕ ∨ ψ) ∈ Γ iff either ϕ ∈ Γ or ψ ∈ Γ.

3. ( ϕ → ψ) ∈ Γ iff either ϕ ∉ Γ or ψ ∈ Γ.

Lemma 19.26. Every finitely satisfiable set Γ can be extended to a saturated finitely satisfiable set Γ′ .

Proposition 19.27. Suppose Γ is complete, finitely satisfiable, and saturated.

1. ∃ x ϕ( x ) ∈ Γ iff ϕ(t) ∈ Γ for at least one closed term t.

2. ∀ x ϕ( x ) ∈ Γ iff ϕ(t) ∈ Γ for all closed terms t.

Lemma 19.28. Every finitely satisfiable set Γ can be extended to a complete and
finitely satisfiable set Γ ∗ .

Theorem 19.29 (Compactness). Γ is satisfiable if and only if it is finitely satisfiable.

Proof. If Γ is satisfiable, then there is a structure M such that M ⊨ ϕ for all ϕ ∈ Γ. Of course, this M also satisfies every finite subset of Γ, so Γ is finitely satisfiable.

Now suppose that Γ is finitely satisfiable. By Lemma 19.26, there is a finitely satisfiable, saturated set Γ′ ⊇ Γ. By Lemma 19.28, Γ′ can be extended to a complete and finitely satisfiable set Γ ∗ , and Γ ∗ is still saturated. Construct the term model M( Γ ∗ ) as in Definition 19.9. Note that Proposition 19.10 did not rely on the fact that Γ ∗ is consistent (or complete or saturated, for that matter), but just on the fact that M( Γ ∗ ) is covered. The proof of the Truth Lemma (Lemma 19.11) goes through if we replace the appeals to the properties of complete consistent sets by appeals to Proposition 19.25 and Proposition 19.27.

19.11 The Löwenheim-Skolem Theorem


The Löwenheim-Skolem Theorem says that if a theory has an infinite model,
then it also has a model that is at most denumerable. An immediate consequence of this fact is that first-order logic cannot express that the size of
a structure is non-enumerable: any sentence or set of sentences satisfied in
all non-enumerable structures is also satisfied in some enumerable structure.


Theorem 19.30. If Γ is consistent then it has an enumerable model, i.e., it is satisfi-


able in a structure whose domain is either finite or denumerable.

Proof. If Γ is consistent, the structure M delivered by the proof of the com-


pleteness theorem has a domain |M| that is no larger than the set of the terms
of the language L. So M is at most denumerable.

Theorem 19.31. If Γ is a consistent set of sentences in the language of first-order logic


without identity, then it has a denumerable model, i.e., it is satisfiable in a structure
whose domain is infinite and enumerable.

Proof. If Γ is consistent and contains no sentences in which identity appears, then the structure M delivered by the proof of the completeness theorem has a domain |M| identical to the set of terms of the language L′ . So M is denumerable, since Trm(L′ ) is.

Example 19.32 (Skolem’s Paradox). Zermelo-Fraenkel set theory ZFC is a


very powerful framework in which practically all mathematical statements
can be expressed, including facts about the sizes of sets. So for instance, ZFC
can prove that the set R of real numbers is non-enumerable, it can prove Can-
tor’s Theorem that the power set of any set is larger than the set itself, etc. If
ZFC is consistent, its models are all infinite, and moreover, they all contain
elements about which the theory says that they are non-enumerable, such as the element that witnesses the theorem of ZFC that the power set of the natural numbers exists. By the Löwenheim-Skolem Theorem, ZFC also has
enumerable models—models that contain “non-enumerable” sets but which
themselves are enumerable.

Problems
Problem 19.1. Complete the proof of ??.

Problem 19.2. Complete the proof of Proposition 19.10.

Problem 19.3. Complete the proof of Lemma 19.11.

Problem 19.4. Complete the proof of Proposition 19.13.

Problem 19.5. Use Corollary 19.19 to prove Theorem 19.18, thus showing that the two formulations of the completeness theorem are equivalent.

Problem 19.6. In order for a derivation system to be complete, its rules must
be strong enough to prove every unsatisfiable set inconsistent. Which of the
rules of derivation were necessary to prove completeness? Are any of these
rules not used anywhere in the proof? In order to answer these questions,
make a list or diagram that shows which of the rules of derivation were used


in which results that lead up to the proof of Theorem 19.18. Be sure to note any tacit uses
of rules in these proofs.

Problem 19.7. Prove (1) of Theorem 19.21.

Problem 19.8. In the standard model of arithmetic N, there is no element k ∈ |N| which satisfies every formula n < x (where n is 0′ . . . ′ with n ′'s). Use the compactness theorem to show that the set of sentences in the language of arithmetic which are true in the standard model of arithmetic N are also true in a structure N′ that contains an element which does satisfy every formula n < x.

Problem 19.9. Prove Proposition 19.25. Avoid the use of ⊢.

Problem 19.10. Prove Lemma 19.26. (Hint: The crucial step is to show that if Γn is finitely
satisfiable, so is Γn ∪ {θn }, without any appeal to derivations or consistency.)

Problem 19.11. Prove Proposition 19.27.

Problem 19.12. Prove Lemma 19.28. (Hint: the crucial step is to show that if Γn is finitely
satisfiable, then either Γn ∪ { ϕn } or Γn ∪ {¬ ϕn } is finitely satisfiable.)

Problem 19.13. Write out the complete proof of the Truth Lemma (Lemma 19.11) in the version required for the proof of Theorem 19.29.



Chapter 20

Beyond First-order Logic

This chapter, adapted from Jeremy Avigad’s logic notes, gives the
briefest of glimpses into which other logical systems there are. It is in-
tended as a chapter suggesting further topics for study in a course that
does not cover them. Each one of the topics mentioned here will—
hopefully—eventually receive its own part-level treatment in the Open
Logic Project.

20.1 Overview
First-order logic is not the only system of logic of interest: there are many ex-
tensions and variations of first-order logic. A logic typically consists of the
formal specification of a language, usually, but not always, a deductive sys-
tem, and usually, but not always, an intended semantics. But the technical use
of the term raises an obvious question: what do logics that are not first-order
logic have to do with the word “logic,” used in the intuitive or philosophical
sense? All of the systems described below are designed to model reasoning of
some form or another; can we say what makes them logical?
No easy answers are forthcoming. The word “logic” is used in different
ways and in different contexts, and the notion, like that of “truth,” has been
analyzed from numerous philosophical stances. For example, one might take
the goal of logical reasoning to be the determination of which statements are
necessarily true, true a priori, true independent of the interpretation of the
nonlogical terms, true by virtue of their form, or true by linguistic convention;
and each of these conceptions requires a good deal of clarification. Even if one
restricts one’s attention to the kind of logic used in mathematics, there is little
agreement as to its scope. For example, in the Principia Mathematica, Russell
and Whitehead tried to develop mathematics on the basis of logic, in the logi-
cist tradition begun by Frege. Their system of logic was a form of higher-type


logic similar to the one described below. In the end they were forced to intro-
duce axioms which, by most standards, do not seem purely logical (notably,
the axiom of infinity, and the axiom of reducibility), but one might nonetheless
hold that some forms of higher-order reasoning should be accepted as logical.
In contrast, Quine, whose ontology does not admit “propositions” as legiti-
mate objects of discourse, argues that second-order and higher-order logic are
really manifestations of set theory in sheep’s clothing; in other words, systems
involving quantification over predicates are not purely logical.
For now, it is best to leave such philosophical issues for a rainy day, and
simply think of the systems below as formal idealizations of various kinds of
reasoning, logical or otherwise.

20.2 Many-Sorted Logic


In first-order logic, variables and quantifiers range over a single domain. But
it is often useful to have multiple (disjoint) domains: for example, you might
want to have a domain of numbers, a domain of geometric objects, a domain
of functions from numbers to numbers, a domain of abelian groups, and so
on.
Many-sorted logic provides this kind of framework. One starts with a list
of “sorts”—the “sort” of an object indicates the “domain” it is supposed to
inhabit. One then has variables and quantifiers for each sort, and (usually)
an identity predicate for each sort. Functions and relations are also “typed”
by the sorts of objects they can take as arguments. Otherwise, one keeps the
usual rules of first-order logic, with versions of the quantifier-rules repeated
for each sort.
For example, to study international relations we might choose a language
with two sorts of objects, French citizens and German citizens. We might have
a unary relation, “drinks wine,” for objects of the first sort; another unary
relation, “eats wurst,” for objects of the second sort; and a binary relation,
“forms a multinational married couple,” which takes two arguments, where
the first argument is of the first sort and the second argument is of the second
sort. If we use variables a, b, c to range over French citizens and x, y, z to range
over German citizens, then

∀ a ∀ x (MarriedTo( a, x ) → (DrinksWine( a) ∨ ¬EatsWurst( x )))

asserts that if any French person is married to a German, either the French
person drinks wine or the German doesn’t eat wurst.
Many-sorted logic can be embedded in first-order logic in a natural way,
by lumping all the objects of the many-sorted domains together into one first-
order domain, using unary predicate symbols to keep track of the sorts, and
relativizing quantifiers. For example, the first-order language corresponding
to the example above would have unary predicate symbols "German" and


"French," in addition to the other relations described, with the sort requirements erased. A sorted quantifier ∀ x ϕ, where x is a variable of the German sort, translates to

∀ x (German( x ) → ϕ).

We need to add axioms that ensure that the sorts are separate—e.g., ∀ x ¬(German( x ) ∧ French( x ))—as well as axioms that guarantee that "drinks wine" only holds of objects satisfying the predicate French( x ), etc. With these conventions and
axioms, it is not difficult to show that many-sorted sentences translate to first-
order sentences, and many-sorted derivations translate to first-order deriva-
tions. Also, many-sorted structures “translate” to corresponding first-order
structures and vice-versa, so we also have a completeness theorem for many-
sorted logic.
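The translation just described is easy to mechanize. Here is a hypothetical Python sketch over a toy formula representation (the encoding and all names are ours): sorted quantifiers are relativized by unary sort predicates, and everything else is left alone.

    # Formulas: atoms are strings; quantifiers are ("forall", var, sort, body)
    # or ("exists", var, sort, body); connectives are tagged tuples.

    def translate(phi):
        if isinstance(phi, str):
            return phi
        op = phi[0]
        if op == "forall":
            _, x, sort, body = phi
            return ("forall", x, ("imp", f"{sort}({x})", translate(body)))
        if op == "exists":
            _, x, sort, body = phi
            return ("exists", x, ("and", f"{sort}({x})", translate(body)))
        # connectives: translate the subformulas and keep the structure
        return (op,) + tuple(translate(a) for a in phi[1:])

    phi = ("forall", "a", "French",
           ("forall", "x", "German",
            ("imp", "MarriedTo(a,x)",
             ("or", "DrinksWine(a)", ("not", "EatsWurst(x)")))))
    print(translate(phi))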

20.3 Second-Order logic


The language of second-order logic allows one to quantify not just over a do-
main of individuals, but over relations on that domain as well. Given a first-
order language L, for each k one adds variables R which range over k-ary
relations, and allows quantification over those variables. If R is a variable for
a k-ary relation, and t1 , . . . , tk are ordinary (first-order) terms, R(t1 , . . . , tk ) is
an atomic formula. Otherwise, the set of formulas is defined just as in the
case of first-order logic, with additional clauses for second-order quantifica-
tion. Note that we only have the identity predicate for first-order terms: if R
and S are relation variables of the same arity k, we can define R = S to be an
abbreviation for

∀ x1 . . . ∀ xk ( R( x1 , . . . , xk ) ↔ S( x1 , . . . , xk )).

The rules for second-order logic simply extend the quantifier rules to the
new second order variables. Here, however, one has to be a little bit careful
to explain how these variables interact with the predicate symbols of L, and
with formulas of L more generally. At the bare minimum, relation variables
count as terms, so one has inferences of the form

ϕ( R) ⊢ ∃ R ϕ( R)

But if L is the language of arithmetic with a constant relation symbol <, one
would also expect the following inference to be valid:

x < y ⊢ ∃ R R( x, y)

or for a given formula ϕ,

ϕ( x1 , . . . , xk ) ⊢ ∃ R R( x1 , . . . , xk )


More generally, we might want to allow inferences of the form

ϕ[λ~x. ψ(~x )/R] ⊢ ∃ R ϕ

where ϕ[λ~x. ψ(~x )/R] denotes the result of replacing every atomic formula of
the form Rt1 , . . . , tk in ϕ by ψ(t1 , . . . , tk ). This last rule is equivalent to having
a comprehension schema, i.e., an axiom of the form

∃ R ∀ x1 , . . . , xk ( ϕ( x1 , . . . , xk ) ↔ R( x1 , . . . , xk )),

one for each formula ϕ in the second-order language, in which R is not a free
variable. (Exercise: show that if R is allowed to occur in ϕ, this schema is
inconsistent!)
When logicians refer to the “axioms of second-order logic” they usually
mean the minimal extension of first-order logic by second-order quantifier
rules together with the comprehension schema. But it is often interesting to
study weaker subsystems of these axioms and rules. For example, note that
in its full generality the axiom schema of comprehension is impredicative: it
allows one to assert the existence of a relation R( x1 , . . . , xk ) that is “defined”
by a formula with second-order quantifiers; and these quantifiers range over
the set of all such relations—a set which includes R itself! Around the turn of
the twentieth century, a common reaction to Russell’s paradox was to lay the
blame on such definitions, and to avoid them in developing the foundations
of mathematics. If one prohibits the use of second-order quantifiers in the
formula ϕ, one has a predicative form of comprehension, which is somewhat
weaker.
From the semantic point of view, one can think of a second-order structure
as consisting of a first-order structure for the language, coupled with a set of
relations on the domain over which the second-order quantifiers range (more
precisely, for each k there is a set of relations of arity k). Of course, if compre-
hension is included in the proof system, then we have the added requirement
that there are enough relations in the “second-order part” to satisfy the com-
prehension axioms—otherwise the proof system is not sound! One easy way
to insure that there are enough relations around is to take the second-order
part to consist of all the relations on the first-order part. Such a structure is
called full, and, in a sense, is really the “intended structure” for the language.
If we restrict our attention to full structures we have what is known as the
full second-order semantics. In that case, specifying a structure boils down
to specifying the first-order part, since the contents of the second-order part
follow from that implicitly.
To summarize, there is some ambiguity when talking about second-order
logic. In terms of the proof system, one might have in mind either

1. A “minimal” second-order proof system, together with some compre-


hension axioms.


2. The “standard” second-order proof system, with full comprehension.

In terms of the semantics, one might be interested in either

1. The “weak” semantics, where a structure consists of a first-order part,


together with a second-order part big enough to satisfy the comprehen-
sion axioms.

2. The “standard” second-order semantics, in which one considers full struc-


tures only.

When logicians do not specify the proof system or the semantics they have
in mind, they are usually refering to the second item on each list. The ad-
vantage to using this semantics is that, as we will see, it gives us categorical
descriptions of many natural mathematical structures; at the same time, the
proof system is quite strong, and sound for this semantics. The drawback is
that the proof system is not complete for the semantics; in fact, no effectively
given proof system is complete for the full second-order semantics. On the
other hand, we will see that the proof system is complete for the weakened
semantics; this implies that if a sentence is not provable, then there is some
structure, not necessarily the full one, in which it is false.
The language of second-order logic is quite rich. One can identify unary
relations with subsets of the domain, and so in particular you can quantify
over these sets; for example, one can express induction for the natural num-
bers with a single axiom

∀ R (( R() ∧ ∀ x ( R( x ) → R( x 0 ))) → ∀ x R( x )).

If one takes the language of arithmetic to have symbols 0, ′, +, × and <, one
can add the following axioms to describe their behavior:

1. ∀ x ¬ x ′ = 0

2. ∀ x ∀y ( x ′ = y′ → x = y)

3. ∀ x ( x + 0) = x

4. ∀ x ∀y ( x + y′ ) = ( x + y)′

5. ∀ x ( x × 0) = 0

6. ∀ x ∀y ( x × y′ ) = (( x × y) + x )

7. ∀ x ∀y ( x < y ↔ ∃z y = ( x + z′ ))

It is not difficult to show that these axioms, together with the axiom of induc-
tion above, provide a categorical description of the structure N, the standard
model of arithmetic, provided we are using the full second-order semantics.
Given any structure M in which these axioms are true, define a function f


from N to the domain of M using ordinary recursion on N, so that f (0) = 0M and f ( x + 1) = ′M ( f ( x )). Using ordinary induction on N and the fact that axioms (1) and (2) hold in M, we see that f is injective. To see that f is surjective, let P be the set of elements of |M| that are in the range of f . Since M is full, P is in the second-order domain. By the construction of f , we know that 0M is in P, and that P is closed under ′M . The fact that the induction axiom holds in M (in particular, for P) guarantees that P is equal to the entire first-order domain of M. This shows that f is a bijection. Showing that f is a homomorphism is no more difficult, using ordinary induction on N repeatedly.
no more difficult, using ordinary induction on N repeatedly.
In set-theoretic terms, a function is just a special kind of relation; for ex-
ample, a unary function f can be identified with a binary relation R satisfying
∀ x ∃y R( x, y). As a result, one can quantify over functions too. Using the full
semantics, one can then define the class of infinite structures to be the class of
structures M for which there is an injective function from the domain of M to
a proper subset of itself:

∃ f (∀ x ∀y ( f ( x ) = f (y) → x = y) ∧ ∃y ∀ x f ( x ) ≠ y).

The negation of this sentence then defines the class of finite structures.
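Under the full semantics this sentence pins down infinitude exactly. On any finite domain, the brute-force search in the hypothetical Python sketch below confirms that no injective but non-surjective self-map exists; all names are ours.

    from itertools import product

    def dedekind_infinite(domain):
        # Is there a function domain -> domain that is injective but not
        # surjective?  Brute force over all |D|^|D| functions.
        d = list(domain)
        for values in product(d, repeat=len(d)):
            f = dict(zip(d, values))
            injective = len(set(f.values())) == len(d)
            surjective = set(f.values()) == set(d)
            if injective and not surjective:
                return True
        return False

    print(dedekind_infinite({0, 1, 2}))   # False: by the pigeonhole
                                          # principle, finite sets are never
                                          # Dedekind-infinite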
In addition, one can define the class of well-orderings, by adding the fol-
lowing to the definition of a linear ordering:

∀ P (∃ x P( x ) → ∃ x ( P( x ) ∧ ∀y (y < x → ¬ P(y)))).

This asserts that every non-empty set has a least element, modulo the iden-
tification of “set” with “one-place relation”. For another example, one can
express the notion of connectedness for graphs, by saying that there is no non-
trivial separation of the vertices into disconnected parts:

¬∃ A (∃ x A( x ) ∧ ∃y ¬ A(y) ∧ ∀w ∀z (( A(w) ∧ ¬ A(z)) → ¬ R(w, z))).

For yet another example, you might try as an exercise to define the class of
finite structures whose domain has even size. More strikingly, one can pro-
vide a categorical description of the real numbers as a complete ordered field
containing the rationals.
In short, second-order logic is much more expressive than first-order logic.
That’s the good news; now for the bad. We have already mentioned that there
is no effective proof system that is complete for the full second-order seman-
tics. For better or for worse, many of the properties of first-order logic are
absent, including compactness and the Löwenheim-Skolem theorems.
On the other hand, if one is willing to give up the full second-order seman-
tics in terms of the weaker one, then the minimal second-order proof system
is complete for this semantics. In other words, if we read ⊢ as "proves in the minimal system" and ⊨ as "logically implies in the weaker semantics", we can show that whenever Γ ⊨ ϕ then Γ ⊢ ϕ. If one wants to include specific


comprehension axioms in the proof system, one has to restrict the semantics
to second-order structures that satisfy these axioms: for example, if ∆ con-
sists of a set of comprehension axioms (possibly all of them), we have that if Γ ∪ ∆ ⊨ ϕ, then Γ ∪ ∆ ⊢ ϕ. In particular, if ϕ is not provable using the com-
prehension axioms we are considering, then there is a model of ¬ ϕ in which
these comprehension axioms nonetheless hold.
The easiest way to see that the completeness theorem holds for the weaker
semantics is to think of second-order logic as a many-sorted logic, as follows.
One sort is interpreted as the ordinary “first-order” domain, and then for each
k we have a domain of “relations of arity k.” We take the language to have
built-in relation symbols "truek ( R, x1 , . . . , xk )" which are meant to assert that R holds of x1 , . . . , xk , where R is a variable of the sort "k-ary relation" and x1 ,
. . . , xk are objects of the first-order sort.
With this identification, the weak second-order semantics is essentially the
usual semantics for many-sorted logic; and we have already observed that
many-sorted logic can be embedded in first-order logic. Modulo the trans-
lations back and forth, then, the weaker conception of second-order logic is
really a form of first-order logic in disguise, where the domain contains both
“objects” and “relations” governed by the appropriate axioms.

20.4 Higher-Order logic


Passing from first-order logic to second-order logic enabled us to talk about
sets of objects in the first-order domain, within the formal language. Why stop
there? For example, third-order logic should enable us to deal with sets of sets
of objects, or perhaps even sets which contain both objects and sets of objects.
And fourth-order logic will let us talk about sets of objects of that kind. As
you may have guessed, one can iterate this idea arbitrarily.
In practice, higher-order logic is often formulated in terms of functions
instead of relations. (Modulo the natural identifications, this difference is
inessential.) Given some basic “sorts” A, B, C, . . . (which we will now call
“types”), we can create new ones by stipulating

If σ and τ are finite types then so is σ → τ.

Think of types as syntactic “labels,” which classify the objects we want in our
domain; σ → τ describes those objects that are functions which take objects of
type σ to objects of type τ. For example, we might want to have a type Ω of
truth values, “true” and “false,” and a type N of natural numbers. In that case,
you can think of objects of type N → Ω as unary relations, or subsets of N;
objects of type N → N are functions from natural numbers to natural numbers;
and objects of type (N → N) → N are “functionals,” that is, higher-type
functions that take functions to numbers.


As in the case of second-order logic, one can think of higher-order logic as


a kind of many-sorted logic, where there is a sort for each type of object we
want to consider. But it is usually clearer just to define the syntax of higher-
type logic from the ground up. For example, we can define a set of finite types
inductively, as follows:

1. N is a finite type.

2. If σ and τ are finite types, then so is σ → τ.

3. If σ and τ are finite types, so is σ × τ.

Intuitively, N denotes the type of the natural numbers, σ → τ denotes the


type of functions from σ to τ, and σ × τ denotes the type of pairs of objects,
one from σ and one from τ. We can then define a set of terms inductively, as
follows:

1. For each type σ, there is a stock of variables x, y, z, . . . of type σ

2. 0 is a term of type N

3. S (successor) is a term of type N → N

4. If s is a term of type σ, and t is a term of type N → (σ → σ ), then Rst is


a term of type N → σ

5. If s is a term of type τ → σ and t is a term of type τ, then s(t) is a term


of type σ

6. If s is a term of type σ and x is a variable of type τ, then λx. s is a term of


type τ → σ.

7. If s is a term of type σ and t is a term of type τ, then ⟨s, t⟩ is a term of type σ × τ.

8. If s is a term of type σ × τ then p1 (s) is a term of type σ and p2 (s) is a


term of type τ.

Intuitively, Rst denotes the function defined recursively by

Rst (0) = s
Rst ( x + 1) = t( x, Rst ( x )),

⟨s, t⟩ denotes the pair whose first component is s and whose second compo-
nent is t, and p1 (s) and p2 (s) denote the first and second elements (“projec-
tions”) of s. Finally, λx. s denotes the function f defined by

f (x) = s


for any x of type σ; so item (6) gives us a form of comprehension, enabling us


to define functions using terms. Formulas are built up from identity predicate
statements s = t between terms of the same type, the usual propositional
connectives, and higher-type quantification. One can then take the axioms
of the system to be the basic equations governing the terms defined above,
together with the usual rules of logic with quantifiers and identity predicate.
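The behavior of the recursor R is easy to simulate. The following hypothetical Python sketch implements the two recursion equations above and uses them to define addition and factorial; all names are our own.

    def rec(s, t):
        # rec(s, t) : N -> sigma, with rec(s, t)(0) = s and
        # rec(s, t)(n + 1) = t(n, rec(s, t)(n)).
        def r(n):
            acc = s
            for k in range(n):
                acc = t(k, acc)
            return acc
        return r

    # Addition of m, defined by recursion on the second argument.
    def add(m):
        return rec(m, lambda _k, acc: acc + 1)

    assert add(3)(4) == 7
    # Factorial, where the step function uses the recursion variable k.
    fact = rec(1, lambda k, acc: (k + 1) * acc)
    assert fact(5) == 120
    print("recursor examples pass")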
If one augments the finite type system with a type Ω of truth values, one
has to include axioms which govern its use as well. In fact, if one is clever, one
can get rid of complex formulas entirely, replacing them with terms of type Ω!
The proof system can then be modified accordingly. The result is essentially
the simple theory of types set forth by Alonzo Church in the 1930s.
As in the case of second-order logic, there are different versions of higher-
type semantics that one might want to use. In the full version, variables of
type σ → τ range over the set of all functions from the objects of type σ to
objects of type τ. As you might expect, this semantics is too strong to admit
a complete, effective proof system. But one can consider a weaker semantics,
in which a structure consists of sets of elements Tτ for each type τ, together
with appropriate operations for application, projection, etc. If the details are
carried out correctly, one can obtain completeness theorems for the kinds of
proof systems described above.
Higher-type logic is attractive because it provides a framework in which
we can embed a good deal of mathematics in a natural way: starting with N,
one can define real numbers, continuous functions, and so on. It is also partic-
ularly attractive in the context of intuitionistic logic, since the types have clear
“constructive” intepretations. In fact, one can develop constructive versions
of higher-type semantics (based on intuitionistic, rather than classical logic)
that clarify these constructive interpretations quite nicely, and are, in many
ways, more interesting than the classical counterparts.

20.5 Intuitionistic Logic

In contrast to second-order and higher-order logic, intuitionistic first-order


logic represents a restriction of the classical version, intended to model a more
“constructive” kind of reasoning. The following examples may serve to illus-
trate some of the underlying motivations.
Suppose someone came up to you one day and announced that they had
determined a natural number x, with the property that if x is prime, the Rie-
mann hypothesis is true, and if x is composite, the Riemann hypothesis is
false. Great news! Whether the Riemann hypothesis is true or not is one of
the big open questions of mathematics, and here they seem to have reduced
the problem to one of calculation, that is, to the determination of whether a
specific number is prime or not.


What is the magic value of x? They describe it as follows: x is the natural


number that is equal to 7 if the Riemann hypothesis is true, and 9 otherwise.
Angrily, you demand your money back. From a classical point of view, the
description above does in fact determine a unique value of x; but what you
really want is a value of x that is given explicitly.
To take another, perhaps less contrived example, consider the following
question. We know that it is possible to raise an irrational number to a rational power, and get a rational result. For example, √2^2 = 2. What is less clear
is whether or not it is possible to raise an irrational number to an irrational
power, and get a rational result. The following theorem answers this in the
affirmative:

Theorem 20.1. There are irrational numbers a and b such that a^b is rational.
Proof. Consider √2^√2 . If this is rational, we are done: we can let a = b = √2. Otherwise, it is irrational. Then we have

(√2^√2 )^√2 = √2^(√2·√2) = √2^2 = 2,

which is certainly rational. So, in this case, let a be √2^√2 , and let b be √2.

Does this constitute a valid proof? Most mathematicians feel that it does.
But again, there is something a little bit unsatisfying here: we have proved the
existence of a pair of real numbers with a certain property, without being able
to say which pair of numbers it is. It is possible to prove the same result, but in such a way that the pair a, b is given in the proof: take a = √3 and b = log3 4. Then

a^b = √3^(log3 4) = 3^((1/2)·log3 4) = (3^(log3 4))^(1/2) = 4^(1/2) = 2,

since 3^(log3 x) = x.
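A quick numeric sanity check of this computation (a hypothetical snippet; floating-point arithmetic, so only approximate):

    import math

    a = math.sqrt(3)       # irrational
    b = math.log(4, 3)     # log base 3 of 4, also irrational
    print(a ** b)          # approximately 2.0, since a^b = 4^(1/2) = 2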
Intuitionistic logic is designed to model a kind of reasoning where moves
like the one in the first proof are disallowed. Proving the existence of an x
satisfying ϕ( x ) means that you have to give a specific x, and a proof that it
satisfies ϕ, like in the second proof. Proving that ϕ or ψ holds requires that
you can prove one or the other.
Formally speaking, intuitionistic first-order logic is what you get if you restrict a proof system for first-order logic in a certain way. Similarly,
there are intuitionistic versions of second-order or higher-order logic. From
the mathematical point of view, these are just formal deductive systems, but,
as already noted, they are intended to model a kind of mathematical reason-
ing. One can take this to be the kind of reasoning that is justified on a cer-
tain philosophical view of mathematics (such as Brouwer’s intuitionism); one
can take it to be a kind of mathematical reasoning which is more “concrete”
and satisfying (along the lines of Bishop’s constructivism); and one can argue


about whether or not the formal description captures the informal motiva-
tion. But whatever philosophical positions we may hold, we can study intu-
itionistic logic as a formally presented logic; and for whatever reasons, many
mathematical logicians find it interesting to do so.
There is an informal constructive interpretation of the intuitionist connec-
tives, usually known as the Brouwer-Heyting-Kolmogorov interpretation. It
runs as follows: a proof of ϕ ∧ ψ consists of a proof of ϕ paired with a proof
of ψ; a proof of ϕ ∨ ψ consists of either a proof of ϕ, or a proof of ψ, where
we have explicit information as to which is the case; a proof of ϕ → ψ con-
sists of a procedure, which transforms a proof of ϕ to a proof of ψ; a proof of
∀ x ϕ( x ) consists of a procedure which returns a proof of ϕ( x ) for any value
of x; and a proof of ∃ x ϕ( x ) consists of a value of x, together with a proof that
this value satisfies ϕ. One can describe the interpretation in computational
terms known as the “Curry-Howard isomorphism” or the “formulas-as-types
paradigm”: think of a formula as specifying a certain kind of data type, and
proofs as computational objects of these data types that enable us to see that
the corresponding formula is true.
Intuitionistic logic is often thought of as being classical logic “minus” the
law of the excluded middle. The following theorem makes this more precise.

Theorem 20.2. Intuitionistically, the following axiom schemata are equivalent:

1. (¬ ϕ → ⊥) → ϕ.

2. ϕ ∨ ¬ ϕ

3. ¬¬ ϕ → ϕ

Obtaining instances of one schema from either of the others is a good ex-
ercise in intuitionistic logic.
The first deductive systems for intuitionistic propositional logic, put forth
as formalizations of Brouwer’s intuitionism, are due, independently, to Kol-
mogorov, Glivenko, and Heyting. The first formalization of intuitionistic first-
order logic (and parts of intuitionist mathematics) is due to Heyting. Though
a number of classically valid schemata are not intuitionistically valid, many
are.
The double-negation translation describes an important relationship between classical and intuitionist logic. It is defined inductively as follows (think of ϕN


as the “intuitionist” translation of the classical formula ϕ):

ϕN ≡ ¬¬ ϕ for atomic formulas ϕ
( ϕ ∧ ψ)N ≡ ( ϕN ∧ ψN )
( ϕ ∨ ψ)N ≡ ¬¬( ϕN ∨ ψN )
( ϕ → ψ)N ≡ ( ϕN → ψN )
(∀ x ϕ)N ≡ ∀ x ϕN
(∃ x ϕ)N ≡ ¬¬∃ x ϕN
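Since the clauses are purely syntactic, the translation can be written down directly. Below is a hypothetical Python sketch over a toy formula encoding of our own; ¬ϕ is treated as ϕ → ⊥ with ⊥ left fixed, so (¬ϕ)N comes out as ¬(ϕN).

    def neg(phi):
        return ("not", phi)

    def N(phi):
        if isinstance(phi, str):                 # atomic formula
            return neg(neg(phi))
        op = phi[0]
        if op in ("and", "imp"):
            return (op, N(phi[1]), N(phi[2]))
        if op == "or":
            return neg(neg(("or", N(phi[1]), N(phi[2]))))
        if op == "forall":
            return ("forall", phi[1], N(phi[2]))
        if op == "exists":
            return neg(neg(("exists", phi[1], N(phi[2]))))
        if op == "not":                          # not-phi is phi -> bottom
            return ("not", N(phi[1]))
        raise ValueError(op)

    print(N(("or", "p", ("not", "p"))))
    # ('not', ('not', ('or', ('not', ('not', 'p')),
    #                        ('not', ('not', ('not', 'p'))))))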

Kolmogorov and Glivenko had versions of this translation for propositional


logic; for predicate logic, it is due to Gödel and Gentzen, independently. We
have

Theorem 20.3. 1. ϕ ↔ ϕ N is provable classically

2. If ϕ is provable classically, then ϕ N is provable intuitionistically.

We can now envision the following dialogue. Classical mathematician:


“I’ve proved ϕ!” Intuitionist mathematician: “Your proof isn’t valid. What
you’ve really proved is ϕ N .” Classical mathematician: “Fine by me!” As far as
the classical mathematician is concerned, the intuitionist is just splitting hairs,
since the two are equivalent. But the intuitionist insists there is a difference.
Note that the above translation concerns pure logic only; it does not ad-
dress the question as to what the appropriate nonlogical axioms are for classi-
cal and intuitionistic mathematics, or what the relationship is between them.
But the following slight extension of the theorem above provides some useful
information:

Theorem 20.4. If Γ proves ϕ classically, Γ N proves ϕ N intuitionistically.

In other words, if ϕ is provable from some hypotheses classically, then ϕ N


is provable from their double-negation translations.
To show that a sentence or propositional formula is intuitionistically valid,
all you have to do is provide a proof. But how can you show that it is not
valid? For that purpose, we need a semantics that is sound, and preferably
complete. A semantics due to Kripke nicely fits the bill.
We can play the same game we did for classical logic: define the semantics,
and prove soundness and completeness. It is worthwhile, however, to note
the following distinction. In the case of classical logic, the semantics was the
“obvious” one, in a sense implicit in the meaning of the connectives. Though
one can provide some intuitive motivation for Kripke semantics, the latter
does not offer the same feeling of inevitability. In addition, the notion of a
classical structure is a natural mathematical one, so we can either take the
notion of a structure to be a tool for studying classical first-order logic, or take


classical first-order logic to be a tool for studying mathematical structures.


In contrast, Kripke structures can only be viewed as a logical construct; they
don’t seem to have independent mathematical interest.
A Kripke structure M = ⟨W, R, V⟩ for a propositional language consists of a set W, a partial order R on W with a least element, and a "monotone" assignment of propositional variables to the elements of W. The intuition is that the elements of W represent "worlds," or "states of knowledge"; an element v ≥ u represents a "possible future state" of u; and the propositional variables assigned to u are the propositions that are known to be true in state u. The forcing relation M, w ⊩ ϕ then extends this relationship to arbitrary formulas in the language; read M, w ⊩ ϕ as "ϕ is true in state w." The relationship is
defined inductively, as follows:

1. M, w ⊩ pi iff pi is one of the propositional variables assigned to w.

2. M, w ⊮ ⊥.

3. M, w ⊩ ( ϕ ∧ ψ) iff M, w ⊩ ϕ and M, w ⊩ ψ.

4. M, w ⊩ ( ϕ ∨ ψ) iff M, w ⊩ ϕ or M, w ⊩ ψ.

5. M, w ⊩ ( ϕ → ψ) iff, whenever w′ ≥ w and M, w′ ⊩ ϕ, then M, w′ ⊩ ψ.

It is a good exercise to try to show that ¬( p ∧ q) → (¬ p ∨ ¬q) is not intuition-


istically valid, by cooking up a Kripke structure that provides a counterexam-
ple.
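In fact the smallest counterexamples are tiny. The hypothetical Python sketch below implements the clauses above (with ¬ϕ treated as ϕ → ⊥) on a three-world structure and confirms that the formula fails at the root; the encoding and all names are ours.

    # Two incomparable worlds above a root: w0 <= w1 and w0 <= w2,
    # with p known at w1 and q known at w2.
    W = ["w0", "w1", "w2"]
    geq = {("w0","w0"), ("w1","w1"), ("w2","w2"), ("w1","w0"), ("w2","w0")}
    V = {"w0": set(), "w1": {"p"}, "w2": {"q"}}

    def above(w):
        return [v for v in W if (v, w) in geq]

    def forces(w, phi):
        if isinstance(phi, str):
            return phi in V[w]
        op = phi[0]
        if op == "and":
            return forces(w, phi[1]) and forces(w, phi[2])
        if op == "or":
            return forces(w, phi[1]) or forces(w, phi[2])
        if op == "imp":
            return all(not forces(v, phi[1]) or forces(v, phi[2])
                       for v in above(w))
        if op == "not":   # not-phi behaves like phi -> bottom
            return all(not forces(v, phi[1]) for v in above(w))

    ant = ("not", ("and", "p", "q"))
    con = ("or", ("not", "p"), ("not", "q"))
    print(forces("w0", ant), forces("w0", con))   # True False
    print(forces("w0", ("imp", ant, con)))        # False: not valid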

20.6 Modal Logics


Consider the following example of a conditional sentence:

If Jeremy is alone in that room, then he is drunk and naked and


dancing on the chairs.

This is an example of a conditional assertion that may be materially true but


nonetheless misleading, since it seems to suggest that there is a stronger link
between the antecedent and conclusion other than simply that either the an-
tecedent is false or the consequent true. That is, the wording suggests that the
claim is not only true in this particular world (where it may be trivially true,
because Jeremy is not alone in the room), but that, moreover, the conclusion
would have been true had the antecedent been true. In other words, one can
take the assertion to mean that the claim is true not just in this world, but in
any “possible” world; or that it is necessarily true, as opposed to just true in
this particular world.
Modal logic was designed to make sense of this kind of necessity. One ob-
tains modal propositional logic from ordinary propositional logic by adding a


box operator; which is to say, if ϕ is a formula, so is □ϕ. Intuitively, □ϕ asserts that ϕ is necessarily true, or true in any possible world. ♦ϕ is usually taken to be an abbreviation for ¬□¬ ϕ, and can be read as asserting that ϕ is possibly
true. Of course, modality can be added to predicate logic as well.
Kripke structures can be used to provide a semantics for modal logic; in
fact, Kripke first designed this semantics with modal logic in mind. Rather
than restricting to partial orders, more generally one has a set of “possible
worlds,” P, and a binary “accessibility” relation R( x, y) between worlds. In-
tuitively, R( p, q) asserts that the world q is compatible with p; i.e., if we are
“in” world p, we have to entertain the possibility that the world could have
been like q.
Modal logic is sometimes called an “intensional” logic, as opposed to an
“extensional” one. The intended semantics for an extensional logic, like clas-
sical logic, will only refer to a single world, the “actual” one; while the seman-
tics for an “intensional” logic relies on a more elaborate ontology. In addition
to capturing necessity, one can use modality to capture other linguistic constructions, reinterpreting □ and ♦ according to the application. For exam-
ple:
1. In provability logic, □ϕ is read "ϕ is provable" and ♦ϕ is read "ϕ is consistent."

2. In epistemic logic, one might read □ϕ as "I know ϕ" or "I believe ϕ."

3. In temporal logic, one can read □ϕ as "ϕ is always true" and ♦ϕ as "ϕ is sometimes true."
One would like to augment logic with rules and axioms dealing with modal-
ity. For example, the system S4 consists of the ordinary axioms and rules of
propositional logic, together with the following axioms:

□( ϕ → ψ) → (□ϕ → □ψ)
□ϕ → ϕ
□ϕ → □□ϕ

as well as a rule, "from ϕ conclude □ϕ." S5 adds the following axiom:

♦ϕ → □♦ϕ

Variations of these axioms may be suitable for different applications; for ex-
ample, S5 is usually taken to characterize the notion of logical necessity. And
the nice thing is that one can usually find a semantics for which the proof
system is sound and complete by restricting the accessibility relation in the
Kripke structures in natural ways. For example, S4 corresponds to the class
of Kripke structures in which the accessibility relation is reflexive and transi-
tive. S5 corresponds to the class of Kripke structures in which the accessibility


relation is universal, which is to say that every world is accessible from every
other; so □ϕ holds if and only if ϕ holds in every world.
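Evaluating □ and ♦ on a finite Kripke structure is equally mechanical. The following hypothetical Python sketch checks, by brute force on a small reflexive and transitive frame, that the characteristic S4 axioms □ϕ → ϕ and □ϕ → □□ϕ come out true everywhere; all names are ours.

    W = [0, 1, 2]
    R = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 2), (0, 2)}   # reflexive, transitive
    V = {"p": {1, 2}}

    def true_at(w, phi):
        if isinstance(phi, str):
            return w in V[phi]
        op = phi[0]
        if op == "box":
            return all(true_at(v, phi[1]) for v in W if (w, v) in R)
        if op == "dia":
            return any(true_at(v, phi[1]) for v in W if (w, v) in R)
        if op == "imp":
            return (not true_at(w, phi[1])) or true_at(w, phi[2])

    # box p -> p holds everywhere on a reflexive frame:
    print(all(true_at(w, ("imp", ("box", "p"), "p")) for w in W))   # True
    # box p -> box box p holds everywhere on a transitive frame:
    print(all(true_at(w, ("imp", ("box", "p"),
                          ("box", ("box", "p")))) for w in W))      # True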

20.7 Other Logics


As you may have gathered by now, it is not hard to design a new logic. You
too can create your own syntax, make up a deductive system, and fashion
a semantics to go with it. You might have to be a bit clever if you want the
proof system to be complete for the semantics, and it might take some effort to
convince the world at large that your logic is truly interesting. But, in return,
you can enjoy hours of good, clean fun, exploring your logic’s mathematical
and computational properties.
Recent decades have witnessed a veritable explosion of formal logics. Fuzzy
logic is designed to model reasoning about vague properties. Probabilistic
logic is designed to model reasoning about uncertainty. Default logics and
nonmonotonic logics are designed to model defeasible forms of reasoning,
which is to say, “reasonable” inferences that can later be overturned in the face
of new information. There are epistemic logics, designed to model reasoning
about knowledge; causal logics, designed to model reasoning about causal re-
lationships; and even “deontic” logics, which are designed to model reason-
ing about moral and ethical obligations. Depending on whether the primary
motivation for introducing these systems is philosophical, mathematical, or
computational, you may find such creatures studied under the rubric of math-
ematical logic, philosophical logic, artificial intelligence, cognitive science, or
elsewhere.
The list goes on and on, and the possibilities seem endless. We may never
attain Leibniz’ dream of reducing all of human reason to calculation—but that
can’t stop us from trying.



Part IV

Model Theory


Material on model theory is incomplete and experimental. It is cur-


rently simply an adaptation of Aldo Antonelli’s notes on model theory,
less those topics covered in the part on first-order logic (theories, com-
pleteness, compactness). It requires much more introduction, motivation,
and explanation, as well as exercises, to be useful for a textbook. Andy
Arana is planning to work on this part specifically (issue #65).



Chapter 21

Basics of Model Theory

21.1 Reducts and Expansions


Often it is useful or necessary to compare languages which have symbols in common, as well as structures for these languages. The most common case is when all the symbols in a language L are also part of a language L′ , i.e., L ⊆ L′ . An L-structure M can then always be expanded to an L′ -structure by adding interpretations of the additional symbols while leaving the interpretations of the common symbols the same. On the other hand, from an L′ -structure M′ we can obtain an L-structure simply by "forgetting" the interpretations of the symbols that do not occur in L.

Definition 21.1. Suppose L ⊆ L′, M is an L-structure and M′ is an L′-structure. M is the reduct of M′ to L, and M′ is an expansion of M to L′ iff

1. |M| = |M′|;

2. For every constant symbol c ∈ L, c^M = c^M′;

3. For every function symbol f ∈ L, f^M = f^M′;

4. For every predicate symbol P ∈ L, P^M = P^M′.
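For example (a standard illustration): let L = {<} and L′ = {<, +}. The L′-structure M′ = (N, <, +) has exactly one reduct to L, namely M = (N, <), obtained by forgetting +. In the other direction, (N, <, +) is just one of many expansions of (N, <) to L′, since nothing forces + to be interpreted as addition; any two-place function on N would do.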

Proposition 21.2. If an L-structure M is a reduct of an L′-structure M′, then for all L-sentences ϕ,

M ⊨ ϕ iff M′ ⊨ ϕ.

Proof. Exercise.

Definition 21.3. When we have an L-structure M, and L′ = L ∪ {P} is the expansion of L obtained by adding a single n-place predicate symbol P, and R ⊆ |M|^n is an n-place relation, then we write (M, R) for the expansion M′ of M with P^M′ = R.


21.2 Substructures
The domain of a structure M may be a subset of the domain of another structure M′. But we should obviously only consider M a "part" of M′ if not only |M| ⊆ |M′|, but M and M′ also "agree" in how they interpret the symbols of the language, at least on the shared part |M|.

Definition 21.4. Given structures M and M′ for the same language L, we say that M is a substructure of M′, and M′ an extension of M, written M ⊆ M′, iff

1. |M| ⊆ |M′|;

2. For each constant symbol c ∈ L, c^M = c^M′;

3. For each n-place function symbol f ∈ L, f^M(a1, . . . , an) = f^M′(a1, . . . , an) for all a1, . . . , an ∈ |M|;

4. For each n-place predicate symbol R ∈ L, ⟨a1, . . . , an⟩ ∈ R^M iff ⟨a1, . . . , an⟩ ∈ R^M′ for all a1, . . . , an ∈ |M|.

Remark 1. If the language contains no constant or function symbols, then any N ⊆ |M| determines a substructure N of M with domain |N| = N by putting R^N = R^M ∩ N^n (for each n-place predicate symbol R).
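For instance, in the language containing just <, the structure (N, <) is a substructure of (Z, <): the domain N ⊆ Z determines it as in Remark 1, since <^N is exactly <^Z ∩ N^2.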

21.3 Overspill
Theorem 21.5. If a set Γ of sentences has arbitrarily large finite models, then it has
an infinite model.

Proof. Expand the language of Γ by adding countably many new constants c0, c1, . . . and consider the set Γ ∪ {ci ≠ cj : i ≠ j}. To say that Γ has arbitrarily large finite models means that for every m > 0 there is n ≥ m such that Γ has a model of cardinality n. This implies that Γ ∪ {ci ≠ cj : i ≠ j} is finitely satisfiable: a finite subset mentions only finitely many of the ci, say k of them, and can be satisfied in any model of Γ of cardinality at least k by interpreting the mentioned constants as distinct elements. By compactness, Γ ∪ {ci ≠ cj : i ≠ j} has a model M whose domain must be infinite, since it satisfies all inequalities ci ≠ cj.

Proposition 21.6. There is no sentence ϕ of any first-order language that is true in


a structure M if and only if the domain |M| of the structure is infinite.

Proof. If there were such a ϕ, its negation ¬ϕ would be true in all and only the finite structures. It would therefore have arbitrarily large finite models but lack an infinite model, contradicting Theorem 21.5.


21.4 Isomorphic Structures


First-order structures can be alike in one of two ways. One way in which they can be alike is that they make the same sentences true. We call such structures
elementarily equivalent. But structures can be very different and still make the
same sentences true—for instance, one can be enumerable and the other not.
This is because there are lots of features of a structure that cannot be expressed
in first-order languages, either because the language is not rich enough, or be-
cause of fundamental limitations of first-order logic such as the Löwenheim-
Skolem theorem. So another, stricter, aspect in which structures can be alike is
if they are fundamentally the same, in the sense that they only differ in the ob-
jects that make them up, but not in their structural features. A way of making
this precise is by the notion of an isomorphism.

Definition 21.7. Given two structures M and M′ for the same language L, we say that M is elementarily equivalent to M′, written M ≡ M′, if and only if for every sentence ϕ of L, M ⊨ ϕ iff M′ ⊨ ϕ.

Definition 21.8. Given two structures M and M′ for the same language L, we say that M is isomorphic to M′, written M ≅ M′, if and only if there is a function h : |M| → |M′| such that:

1. h is injective: if h(x) = h(y) then x = y;

2. h is surjective: for every y ∈ |M′| there is x ∈ |M| such that h(x) = y;

3. for every constant symbol c: h(c^M) = c^M′;

4. for every n-place predicate symbol P: ⟨a1, . . . , an⟩ ∈ P^M iff ⟨h(a1), . . . , h(an)⟩ ∈ P^M′;

5. for every n-place function symbol f: h(f^M(a1, . . . , an)) = f^M′(h(a1), . . . , h(an)).
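The isomorphism conditions are finitary enough to check mechanically on finite structures. Here is a sketch (our own illustration, not from the text; the dictionary encoding of structures and the function name are assumptions) that tests whether a candidate map is an isomorphism between two finite structures in a relational language with constants:

```python
# A finite structure: a domain, constant interpretations, and
# relation interpretations (sets of tuples), keyed by symbol.
def is_isomorphism(h, M, N):
    """Check conditions (1)-(4) of Definition 21.8 for finite
    relational structures M and N and a candidate map h (a dict)."""
    # h must be a bijection from |M| onto |N|
    if set(h) != set(M["domain"]):
        return False
    if sorted(h.values()) != sorted(N["domain"]) or \
       len(set(h.values())) != len(h):
        return False
    # constants must be preserved: h(c^M) = c^N
    for c, val in M["constants"].items():
        if h[val] != N["constants"][c]:
            return False
    # relations must be preserved in both directions
    for P, ext in M["relations"].items():
        image = {tuple(h[a] for a in tup) for tup in ext}
        if image != N["relations"][P]:
            return False
    return True

# Example: ({1, 2}, <) and ({3, 4}, <) are isomorphic via 1->3, 2->4.
M = {"domain": {1, 2}, "constants": {}, "relations": {"<": {(1, 2)}}}
N = {"domain": {3, 4}, "constants": {}, "relations": {"<": {(3, 4)}}}
assert is_isomorphism({1: 3, 2: 4}, M, N)
assert not is_isomorphism({1: 4, 2: 3}, M, N)
```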

Theorem 21.9. If M ≅ M′ then M ≡ M′.

Proof. Let h be an isomorphism of M onto M′. For any assignment s, h ∘ s is the composition of h and s, i.e., the assignment in M′ such that (h ∘ s)(x) = h(s(x)). By induction on t and ϕ one can prove the stronger claims:

a. h(Val^M_s(t)) = Val^M′_{h∘s}(t).

b. M, s ⊨ ϕ iff M′, h ∘ s ⊨ ϕ.

The first is proved by induction on the complexity of t.

1. If t ≡ c, then Val^M_s(c) = c^M and Val^M′_{h∘s}(c) = c^M′. Thus, h(Val^M_s(t)) = h(c^M) = c^M′ (by condition (3) of Definition 21.8) = Val^M′_{h∘s}(t).

2. If t ≡ x, then Val^M_s(x) = s(x) and Val^M′_{h∘s}(x) = h(s(x)). Thus, h(Val^M_s(x)) = h(s(x)) = Val^M′_{h∘s}(x).

3. If t ≡ f(t1, . . . , tn), then

Val^M_s(t) = f^M(Val^M_s(t1), . . . , Val^M_s(tn)) and
Val^M′_{h∘s}(t) = f^M′(Val^M′_{h∘s}(t1), . . . , Val^M′_{h∘s}(tn)).

The induction hypothesis is that for each i, h(Val^M_s(ti)) = Val^M′_{h∘s}(ti). So,

h(Val^M_s(t)) = h(f^M(Val^M_s(t1), . . . , Val^M_s(tn)))
             = f^M′(h(Val^M_s(t1)), . . . , h(Val^M_s(tn)))        (21.1)
             = f^M′(Val^M′_{h∘s}(t1), . . . , Val^M′_{h∘s}(tn))    (21.2)
             = Val^M′_{h∘s}(t)

Here, (21.1) follows by condition (5) of Definition 21.8 and (21.2) by the induction hypothesis.

Part (b) is left as an exercise.

If ϕ is a sentence, the assignments s and h ∘ s are irrelevant, and we have M ⊨ ϕ iff M′ ⊨ ϕ.

Definition 21.10. An automorphism of a structure M is an isomorphism of M


onto itself.

21.5 The Theory of a Structure


Every structure M makes some sentences true, and some false. The set of all
the sentences it makes true is called its theory. That set is in fact a theory, since
anything it entails must be true in all its models, including M.

Definition 21.11. Given a structure M, the theory of M is the set Th(M) of sentences that are true in M, i.e., Th(M) = {ϕ : M ⊨ ϕ}.

We also use the term “theory” informally to refer to sets of sentences hav-
ing an intended interpretation, whether deductively closed or not.

Proposition 21.12. For any M, Th(M) is complete.

Proof. For any sentence ϕ either M ⊨ ϕ or M ⊨ ¬ϕ, so either ϕ ∈ Th(M) or ¬ϕ ∈ Th(M).

Proposition 21.13. If N |= ϕ for every ϕ ∈ Th(M), then M ≡ N.


Proof. Since N ⊨ ϕ for all ϕ ∈ Th(M), Th(M) ⊆ Th(N). If N ⊨ ϕ, then N ⊭ ¬ϕ, so ¬ϕ ∉ Th(M). Since Th(M) is complete, ϕ ∈ Th(M). So, Th(N) ⊆ Th(M), and we have M ≡ N.

Remark 2. Consider R = ⟨R, <⟩, the structure whose domain is the set R of the real numbers, in the language comprising only a 2-place predicate symbol interpreted as the < relation over the reals. Clearly R is non-enumerable; however, since Th(R) is obviously consistent, by the Löwenheim-Skolem theorem it has an enumerable model, say S, and by Proposition 21.13, R ≡ S. Moreover, since R and S are not isomorphic, this shows that the converse of Theorem 21.9 fails in general.

21.6 Partial Isomorphisms


Definition 21.14. Given two structures M and N, a partial isomorphism from M to N is a finite partial function p taking arguments in |M| and returning values in |N|, which satisfies the isomorphism conditions from Definition 21.8 on its domain:

1. p is injective;

2. for every constant symbol c: if p(c^M) is defined, then p(c^M) = c^N;

3. for every n-place predicate symbol P: if a1, . . . , an are in the domain of p, then ⟨a1, . . . , an⟩ ∈ P^M if and only if ⟨p(a1), . . . , p(an)⟩ ∈ P^N;

4. for every n-place function symbol f: if a1, . . . , an are in the domain of p, then p(f^M(a1, . . . , an)) = f^N(p(a1), . . . , p(an)).

That p is finite means that dom(p) is finite.

Notice that the empty function ∅ is always a partial isomorphism between


any two structures.

Definition 21.15. Two structures M and N are partially isomorphic, written M ≅p N, if and only if there is a non-empty set I of partial isomorphisms between M and N satisfying the back-and-forth property:

1. (Forth) For every p ∈ I and a ∈ |M| there is q ∈ I such that p ⊆ q and a is in the domain of q;

2. (Back) For every p ∈ I and b ∈ |N| there is q ∈ I such that p ⊆ q and b is in the range of q.

Theorem 21.16. If M ≅p N and M and N are enumerable, then M ≅ N.

Proof. Since M and N are enumerable, let |M| = {a0, a1, . . .} and |N| = {b0, b1, . . .}. Starting with an arbitrary p0 ∈ I, we define an increasing sequence of partial isomorphisms p0 ⊆ p1 ⊆ p2 ⊆ · · · as follows:

1. if n + 1 is odd, say n = 2r, then using the Forth property find pn+1 ∈ I such that pn ⊆ pn+1 and ar is in the domain of pn+1;

2. if n + 1 is even, say n + 1 = 2r, then using the Back property find pn+1 ∈ I such that pn ⊆ pn+1 and br is in the range of pn+1.

If we now put

p = ⋃_{n≥0} pn,

we have that p is an isomorphism between M and N.

Theorem 21.17. Suppose M and N are structures for a purely relational language (a language containing only predicate symbols, and no function symbols or constants). Then if M ≅p N, also M ≡ N.

Proof. By induction on formulas, one shows that if a1, . . . , an and b1, . . . , bn are such that there is a partial isomorphism p mapping each ai to bi, and s1(xi) = ai and s2(xi) = bi (for i = 1, . . . , n), then M, s1 ⊨ ϕ if and only if N, s2 ⊨ ϕ. The case for n = 0 gives M ≡ N.

Remark 3. If function symbols are present, the previous result is still true, but
one needs to consider the isomorphism induced by p between the substruc-
ture of M generated by a1 , . . . , an and the substructure of N generated by b1 ,
. . . , bn .
The previous result can be “broken down” into stages by establishing a
connection between the number of nested quantifiers in a formula and how
many times the relevant partial isomorphisms can be extended.

Definition 21.18. For any formula ϕ, the quantifier rank of ϕ, denoted by qr( ϕ) ∈
N, is recursively defined as the highest number of nested quantifiers in ϕ.
Two structures M and N are n-equivalent, written M ≡n N, if they agree on all
sentences of quantifier rank less than or equal to n.
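The recursion behind this definition is simple: atomic formulas have rank 0, connectives take the maximum of the ranks of their immediate subformulas, and a quantifier adds one. A small sketch (our own encoding of formulas as nested tuples; not from the text):

```python
def qr(phi):
    """Quantifier rank: atomic formulas are encoded as strings,
    ('not', phi), ('and', phi, psi), ('or', phi, psi),
    ('implies', phi, psi) as tuples, and ('forall', x, phi),
    ('exists', x, phi) for quantified formulas."""
    if isinstance(phi, str):          # atomic
        return 0
    op = phi[0]
    if op == 'not':
        return qr(phi[1])
    if op in ('and', 'or', 'implies'):
        return max(qr(phi[1]), qr(phi[2]))
    if op in ('forall', 'exists'):
        return 1 + qr(phi[2])
    raise ValueError(op)

# qr of forall x (P(x) -> exists y R(x,y)) is 2:
assert qr(('forall', 'x', ('implies', 'P(x)',
           ('exists', 'y', 'R(x,y)')))) == 2
```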

Proposition 21.19. Let L be a finite purely relational language, i.e., a language


containing finitely many predicate symbols and constant symbols, and no function
symbols. Then for each n ∈ N there are only finitely many first-order sentences in
the language L that have quantifier rank no greater than n, up to logical equivalence.

Proof. By induction on n.

Definition 21.20. Given a structure M, let |M|<ω be the set of all finite se-
quences over |M|. We use a, b, c, . . . to range over finite sequences of elements.
If a ∈ |M|<ω and a ∈ |M|, then aa represents the concatenation of a with a.

Definition 21.21. Given structures M and N, we define relations In ⊆ |M|<ω ×


|N|<ω between sequences of equal length, by recursion on n as follows:

296 Release: (None) ((None))


21.7. DENSE LINEAR ORDERS

1. I0(a, b) if and only if a and b satisfy the same atomic formulas in M and N; i.e., if s1(xi) = ai and s2(xi) = bi and ϕ is atomic with all variables among x1, . . . , xn, then M, s1 ⊨ ϕ if and only if N, s2 ⊨ ϕ.

2. In+1(a, b) if and only if for every a ∈ |M| there is a b ∈ |N| such that In(aa, bb), and vice-versa.
Definition 21.22. Write M ≈n N if In (Λ, Λ) holds of M and N (where Λ is the
empty sequence).
Theorem 21.23. Let L be a purely relational language. Then In(a, b) implies that for every ϕ such that qr(ϕ) ≤ n, we have M, a ⊨ ϕ if and only if N, b ⊨ ϕ (where again a satisfies ϕ if any s such that s(xi) = ai satisfies ϕ). Moreover, if L is finite, the converse also holds.

Proof. The proof that In(a, b) implies that a and b satisfy the same formulas of quantifier rank no greater than n is by an easy induction on ϕ. For the converse we proceed by induction on n, using Proposition 21.19, which ensures that for each n there are at most finitely many non-equivalent formulas of that quantifier rank.

For n = 0 the hypothesis that a and b satisfy the same quantifier-free formulas gives that they satisfy the same atomic ones, so that I0(a, b).

For the n + 1 case, suppose that a and b satisfy the same formulas of quantifier rank no greater than n + 1. In order to show that In+1(a, b), it suffices to show that for each a ∈ |M| there is a b ∈ |N| such that In(aa, bb); and by the inductive hypothesis again, it suffices to show that for each a ∈ |M| there is a b ∈ |N| such that aa and bb satisfy the same formulas of quantifier rank no greater than n.

Given a ∈ |M|, let τna be the set of formulas ψ(x, y) of rank no greater than n satisfied by aa in M; τna is finite up to logical equivalence, so we can assume it is a single first-order formula (the conjunction of finitely many representatives). It follows that a satisfies ∃x τna(x, y), which has quantifier rank no greater than n + 1. By hypothesis b satisfies the same formula in N, so that there is a b ∈ |N| such that bb satisfies τna; in particular, bb satisfies the same formulas of quantifier rank no greater than n as aa. Similarly one shows that for every b ∈ |N| there is a ∈ |M| such that aa and bb satisfy the same formulas of quantifier rank no greater than n, which completes the proof.

Corollary 21.24. If M and N are purely relational structures in a finite language,


then M ≈n N if and only if M ≡n N. In particular M ≡ N if and only if for each n,
M ≈n N .

21.7 Dense Linear Orders


Definition 21.25. A dense linear ordering without endpoints is a structure M for the language containing a single 2-place predicate symbol < satisfying the following sentences:

1. ∀x ¬x < x;

2. ∀x ∀y ∀z (x < y → (y < z → x < z));

3. ∀x ∀y (x < y ∨ x = y ∨ y < x);

4. ∀x ∃y x < y;

5. ∀x ∃y y < x;

6. ∀x ∀y (x < y → ∃z (x < z ∧ z < y)).

Theorem 21.26. Any two enumerable dense linear orderings without endpoints are
isomorphic.

Proof. Let M1 and M2 be enumerable dense linear orderings without endpoints, with <1 = <^M1 and <2 = <^M2, and let I be the set of all partial isomorphisms between them. I is not empty since at least ∅ ∈ I. We show that I satisfies the back-and-forth property; then M1 ≅p M2, and the theorem follows by Theorem 21.16.

To show I satisfies the Forth property, let p ∈ I and let p(ai) = bi for i = 1, . . . , n, and without loss of generality suppose a1 <1 a2 <1 · · · <1 an. Given a ∈ |M1|, find b ∈ |M2| as follows:

1. if a <1 a1, let b ∈ |M2| be such that b <2 b1;

2. if an <1 a, let b ∈ |M2| be such that bn <2 b;

3. if ai <1 a <1 ai+1 for some i, then let b ∈ |M2| be such that bi <2 b <2 bi+1.

It is always possible to find a b with the desired property since M2 is a dense linear ordering without endpoints. Define q = p ∪ {⟨a, b⟩} so that q ∈ I is the desired extension of p. This establishes the Forth property. The Back property is similar. So M1 ≅p M2; by Theorem 21.16, M1 ≅ M2.
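The proof is effective: given enumerations of the two orders, the back-and-forth construction can be run as an algorithm. Here is a sketch (our own illustration; the enumerations, element choices, and function names are assumptions for the demo) matching an initial segment of the rationals with the dyadic rationals:

```python
from fractions import Fraction

def find_partner(pairs, x, target):
    """Forth/Back step: given a finite order-preserving matching `pairs`
    and a new element x on the source side, return an unused partner
    from the list `target` lying in the corresponding open interval."""
    below = [b for a, b in pairs if a < x]
    above = [b for a, b in pairs if a > x]
    lo = max(below) if below else None
    hi = min(above) if above else None
    used = {b for _, b in pairs}
    for y in target:
        if y not in used and (lo is None or lo < y) and (hi is None or y < hi):
            return y
    raise RuntimeError("enumeration prefix too short for this demo")

# Two enumerable dense orders without endpoints: the rationals and
# the dyadic rationals p/2^n (finite prefixes of their enumerations).
rationals = [Fraction(p, q) for q in range(1, 30) for p in range(-60, 60)]
dyadics = [Fraction(p, 2 ** n) for n in range(10) for p in range(-500, 500)]

pairs = []
for i in range(8):
    a = rationals[i]                      # Forth: match a rational
    if all(a != u for u, _ in pairs):
        pairs.append((a, find_partner(pairs, a, dyadics)))
    b = dyadics[i]                        # Back: match a dyadic
    if all(b != v for _, v in pairs):
        flipped = [(v, u) for u, v in pairs]
        pairs.append((find_partner(flipped, b, rationals), b))

pairs.sort()                              # the finite map is order-preserving
assert all(pairs[i][1] < pairs[i + 1][1] for i in range(len(pairs) - 1))
```

Running the loop forever (with full enumerations) would produce the isomorphism p = ⋃ pn of the proof.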

Remark 4. Let S be any enumerable dense linear ordering without endpoints. Then (by Theorem 21.26) S ≅ Q, where Q = (Q, <) is the enumerable dense linear ordering having the set Q of the rational numbers as its domain. Now consider again the structure R = (R, <) from Remark 2. We saw that there is an enumerable structure S such that R ≡ S. But S is an enumerable dense linear ordering without endpoints, and so it is isomorphic (and hence elementarily equivalent) to the structure Q. By transitivity of elementary equivalence, R ≡ Q. (We could have shown this directly by establishing R ≅p Q by the same back-and-forth argument.)


Problems
Problem 21.1. Prove Proposition 21.2.

Problem 21.2. Carry out the proof of (b) of Theorem 21.9 in detail. Make sure to note where each of the five properties characterizing isomorphisms of Definition 21.8 is used.

Problem 21.3. Show that for any structure M, if X is a definable subset of M, and h is an automorphism of M, then X = {h(x) : x ∈ X} (i.e., X is fixed under h).

Problem 21.4. Show in detail that p as defined in the proof of Theorem 21.16 is in fact an isomorphism.

Problem 21.5. Complete the proof of Theorem 21.26 by verifying that I satisfies the Back property.



Chapter 22

Models of Arithmetic

22.1 Introduction
The standard model of arithmetic is the structure N with |N| = N in which 0, ′, +, ×, and < are interpreted as you would expect. That is, 0 names the number 0, ′ is the successor function, + is interpreted as addition and × as multiplication of the numbers in N. Specifically,

0^N = 0
′^N(n) = n + 1
+^N(n, m) = n + m
×^N(n, m) = nm

Of course, there are structures for L A that have domains other than N. For instance, we can take M with domain |M| = {a}* (the finite sequences of the single symbol a, i.e., ∅, a, aa, aaa, . . . ), and interpretations

0^M = ∅
′^M(s) = s⌢a
+^M(a^n, a^m) = a^(n+m)
×^M(a^n, a^m) = a^(nm)

These two structures are "essentially the same" in the sense that the only difference is the elements of the domains but not how the elements of the domains are related among each other by the interpretation functions. We say that the two structures are isomorphic.
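To see the isomorphism concretely, here is a small sketch (our own illustration, not from the text) implementing both structures and the map n ↦ aⁿ between them:

```python
# Standard interpretations on N
succ_N = lambda n: n + 1
add_N  = lambda n, m: n + m

# The same operations on the domain {a}* of strings of a's
succ_M = lambda s: s + "a"
add_M  = lambda s, t: s + t          # a^n + a^m = a^(n+m)

g = lambda n: "a" * n                # the isomorphism N -> {a}*

# g commutes with the interpretations, as an isomorphism must:
for n, m in [(0, 0), (2, 3), (5, 1)]:
    assert g(succ_N(n)) == succ_M(g(n))
    assert g(add_N(n, m)) == add_M(g(n), g(m))
```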
It is an easy consequence of the compactness theorem that any theory true
in N also has models that are not isomorphic to N. Such structures are called
non-standard. The interesting thing about them is that while the elements of a
standard model (i.e., N, but also all structures isomorphic to it) are exhausted


by the values of the standard numerals n, i.e.,

|N| = {Val^N(n) : n ∈ N},

that isn't the case in non-standard models: if M is non-standard, then there is at least one x ∈ |M| such that x ≠ Val^M(n) for all n.
These non-standard elements are pretty neat: they are "infinite natural numbers." But their existence also explains, in a sense, the incompleteness phenomena. Consider an example, e.g., the consistency statement for Peano arithmetic, ConPA, i.e., ¬∃x PrfPA(x, ⌜⊥⌝). Since PA neither proves ConPA nor ¬ConPA, either can be consistently added to PA. Since PA is consistent, N ⊨ ConPA, and consequently N ⊭ ¬ConPA. So N is not a model of PA ∪ {¬ConPA}, and all its models must be non-standard. Models of PA ∪ {¬ConPA} must contain some element that serves as the witness that makes ∃x PrfPA(x, ⌜⊥⌝) true, i.e., a Gödel number of a derivation of a contradiction from PA. Such an element cannot be standard, since PA ⊢ ¬PrfPA(n, ⌜⊥⌝) for every n.

22.2 Standard Models of Arithmetic


The language of arithmetic L A is obviously intended to be about numbers,
specifically, about natural numbers. So, “the” standard model N is special: it
is the model we want to talk about. But in logic, we are often just interested in
structural properties, and any two structures that are isomorphic share those.
So we can be a bit more liberal, and consider any structure that is isomorphic
to N “standard.”

Definition 22.1. A structure for L A is standard if it is isomorphic to N.

Proposition 22.2. If a structure M is standard, its domain is the set of values of the standard numerals, i.e.,

|M| = {Val^M(n) : n ∈ N}.

Proof. Clearly, every Val^M(n) ∈ |M|. We just have to show that every x ∈ |M| is equal to Val^M(n) for some n. Since M is standard, it is isomorphic to N. Suppose g : N → |M| is an isomorphism. Then g(n) = g(Val^N(n)) = Val^M(n). But for every x ∈ |M|, there is an n ∈ N such that g(n) = x, since g is surjective.

If a structure M for L A is standard, the elements of its domain can all be named by the standard numerals 0, 1, 2, . . . , i.e., the terms 0, 0′, 0′′, etc. Of course, this does not mean that the elements of |M| are the numbers, just that we can pick them out the same way we can pick out the numbers in |N|.

Proposition 22.3. If M ⊨ Q and |M| = {Val^M(n) : n ∈ N}, then M is standard.


Proof. We have to show that M is isomorphic to N. Consider the function g : N → |M| defined by g(n) = Val^M(n). By the hypothesis, g is surjective. It is also injective: Q ⊢ n ≠ m whenever n ≠ m. Thus, since M ⊨ Q, M ⊨ n ≠ m whenever n ≠ m. Thus, if n ≠ m, then Val^M(n) ≠ Val^M(m), i.e., g(n) ≠ g(m).

We also have to verify that g is an isomorphism.

1. We have g(0^N) = g(0) since 0^N = 0. By definition of g, g(0) = Val^M(0). But the numeral 0 is just the constant symbol 0, and the value of a term which happens to be a constant symbol is given by what the structure assigns to that constant symbol, i.e., Val^M(0) = 0^M. So we have g(0^N) = 0^M as required.

2. g(′^N(n)) = g(n + 1), since ′ in N is the successor function on N. Then, g(n + 1) = Val^M(n + 1) by definition of g. But n + 1 is the same term as n′, so Val^M(n + 1) = Val^M(n′). By the definition of the value function, this is = ′^M(Val^M(n)). Since Val^M(n) = g(n) we get g(′^N(n)) = ′^M(g(n)).

3. g(+^N(n, m)) = g(n + m), since + in N is the addition function on N. Then, g(n + m) = Val^M(n + m) by definition of g, where n + m is the numeral for the number n + m. But Q proves that the term n + m (the sum of the numerals n and m) equals the numeral for n + m, so the two terms have the same value in M. By the definition of the value function, Val^M(n + m) = +^M(Val^M(n), Val^M(m)). Since Val^M(n) = g(n) and Val^M(m) = g(m), we get g(+^N(n, m)) = +^M(g(n), g(m)).

4. g(×^N(n, m)) = ×^M(g(n), g(m)): Exercise.

5. ⟨n, m⟩ ∈ <^N iff n < m. If n < m, then Q ⊢ n < m, and also M ⊨ n < m. Thus ⟨Val^M(n), Val^M(m)⟩ ∈ <^M, i.e., ⟨g(n), g(m)⟩ ∈ <^M. If n ≮ m, then Q ⊢ ¬n < m, and consequently M ⊭ n < m. Thus, as before, ⟨g(n), g(m)⟩ ∉ <^M. Together, we get: ⟨n, m⟩ ∈ <^N iff ⟨g(n), g(m)⟩ ∈ <^M.

The function g is the most obvious way of defining a mapping from N to the domain of any other structure M for L A, since every such M contains elements named by 0, 1, 2, etc. So it isn't surprising that if M makes at least some basic statements about the n's true in the same way that N does, and g is also bijective, then g will turn out to be an isomorphism. In fact, if |M| contains no elements other than what the n's name, it's the only one.

Proposition 22.4. If M is standard, then g from the proof of Proposition 22.3 is the only isomorphism from N to M.

Proof. Suppose h : N → |M| is an isomorphism between N and M. We show that g = h by induction on n. If n = 0, then g(0) = 0^M by definition of g. But since h is an isomorphism, h(0) = h(0^N) = 0^M, so g(0) = h(0).

Now consider the case for n + 1. We have

g(n + 1) = Val^M(n + 1)    by definition of g
         = Val^M(n′)
         = ′^M(Val^M(n))
         = ′^M(g(n))        by definition of g
         = ′^M(h(n))        by induction hypothesis
         = h(′^N(n))        since h is an isomorphism
         = h(n + 1)

For any denumerable set X, there's a bijection between N and X, so every such set X is potentially the domain of a standard model. In fact, once you pick an object z ∈ X and a suitable function s : X → X as 0^X and ′^X, the interpretation of +, ×, and < is already fixed. Only functions s that are injective and have range X \ {z} are suitable in a standard model. s has to be injective since the successor function in N is, and that ′ is injective is expressed by a sentence true in N which X thus also has to make true. The range of s has to be all of X except z because otherwise there would be some x ≠ z not in the range of s, i.e., the sentence ∀x (x ≠ 0 → ∃y y′ = x) would be false—but it is true in N.

22.3 Non-Standard Models


We call a structure for L A standard if it is isomorphic to N. If a structure isn’t
isomorphic to N, it is called non-standard.

Definition 22.5. A structure M for L A is non-standard if it is not isomorphic to N. The elements x ∈ |M| which are equal to Val^M(n) for some n ∈ N are called standard numbers (of M), and those that are not, non-standard numbers.

By Proposition 22.2, any standard structure for L A contains only standard elements. Consequently, a non-standard structure must contain at least one non-standard element. In fact, the existence of a non-standard element guarantees that the structure is non-standard.

Proposition 22.6. If a structure M for L A contains a non-standard number, M is


non-standard.

Proof. Suppose not, i.e., suppose M is standard but contains a non-standard number x. Let g : N → |M| be an isomorphism. It is easy to see (by induction on n) that g(Val^N(n)) = Val^M(n). In other words, g maps standard numbers of N to standard numbers of M. If M contains a non-standard number, g cannot be surjective, contrary to hypothesis.


It is easy enough to specify non-standard structures for L A. For instance, take the structure with domain Z and interpret all non-logical symbols as usual. Since negative numbers are not values of n for any n, this structure is non-standard. Of course, it will not be a model of arithmetic in the sense that it makes the same sentences true as N. For instance, ∀x x′ ≠ 0 is false. However, we can prove that non-standard models of arithmetic exist easily enough, using the compactness theorem.

Proposition 22.7. Let TA = {ϕ : N ⊨ ϕ} be the theory of N. TA has an enumerable non-standard model.

Proof. Expand L A by a new constant symbol c and consider the set of sentences

Γ = TA ∪ {c ≠ 0, c ≠ 1, c ≠ 2, . . . }

Any model Mc of Γ would contain an element x = c^Mc which is non-standard, since x ≠ Val^Mc(n) for all n ∈ N. Also, obviously, Mc ⊨ TA, since TA ⊆ Γ. If we turn Mc into a structure M for L A simply by forgetting about c, its domain still contains the non-standard x, and also M ⊨ TA. The latter is guaranteed since c does not occur in TA. So, it suffices to show that Γ has a model.

We use the compactness theorem to show that Γ has a model. If every finite subset of Γ is satisfiable, so is Γ. Consider any finite subset Γ0 ⊆ Γ. Γ0 includes some sentences of TA and some of the form c ≠ n, but only finitely many. Suppose k is the largest number so that c ≠ k ∈ Γ0. Define Nk by expanding N to include the interpretation c^Nk = k + 1. Then Nk ⊨ Γ0: if ϕ ∈ TA, Nk ⊨ ϕ since Nk is just like N in all respects except c, and c does not occur in ϕ. And Nk ⊨ c ≠ n, since n ≤ k and Val^Nk(c) = k + 1. Thus, every finite subset of Γ is satisfiable.

22.4 Models of Q

We know that there are non-standard structures that make the same sentences true as N does, i.e., are models of TA. Since N ⊨ Q, any model of TA is also a model of Q. Q is much weaker than TA, e.g., Q ⊬ ∀x ∀y (x + y) = (y + x). Weaker theories are easier to satisfy: they have more models. E.g., Q has models which make ∀x ∀y (x + y) = (y + x) false, but those cannot also be models of TA, or PA for that matter. Models of Q are also relatively simple: we can specify them explicitly.


Example 22.8. Consider the structure K with domain |K| = N ∪ {a} and interpretations

0^K = 0
′^K(x) = x + 1 if x ∈ N, and ′^K(a) = a
+^K(x, y) = x + y if x, y ∈ N, and +^K(x, y) = a otherwise
×^K(x, y) = xy if x, y ∈ N, and ×^K(x, y) = a otherwise
<^K = {⟨x, y⟩ : x, y ∈ N and x < y} ∪ {⟨x, a⟩ : x ∈ |K|}

To show that K ⊨ Q we have to verify that all axioms of Q are true in K. For convenience, let's write x∗ for ′^K(x) (the "successor" of x in K), x ⊕ y for +^K(x, y) (the "sum" of x and y in K), x ⊗ y for ×^K(x, y) (the "product" of x and y in K), and x ≺ y for ⟨x, y⟩ ∈ <^K. With these abbreviations, we can give the operations in K more perspicuously by tables:

x | x∗          x ⊕ y | m      | a          x ⊗ y | m     | a
n | n + 1       n     | n + m  | a          n     | nm    | a
a | a           a     | a      | a          a     | a     | a

We have n ≺ m iff n < m for n, m ∈ N, and x ≺ a for all x ∈ |K|.


K ⊨ ∀x ∀y (x′ = y′ → x = y) since ∗ is injective. K ⊨ ∀x 0 ≠ x′ since 0 is not a ∗-successor in K. K ⊨ ∀x (x ≠ 0 → ∃y x = y′) since for every n > 0, n = (n − 1)∗, and a = a∗.

K ⊨ ∀x (x + 0) = x since n ⊕ 0 = n + 0 = n, and a ⊕ 0 = a by definition of ⊕. K ⊨ ∀x ∀y (x + y′) = (x + y)′ is a bit trickier. If n, m are both standard, we have:

(n ⊕ m∗) = (n + (m + 1)) = (n + m) + 1 = (n ⊕ m)∗

since ⊕ and ∗ agree with + and ′ on standard numbers. Now suppose x ∈ |K| and y = a. Then

(x ⊕ a∗) = (x ⊕ a) = a = a∗ = (x ⊕ a)∗

The remaining case is if x = a but y is standard, say y = n:

(a ⊕ n∗) = (a ⊕ (n + 1)) = a = a∗ = (a ⊕ n)∗

This is of course a bit more detailed than needed. For instance, since a ⊕ z = a whatever z is, we can immediately conclude a ⊕ a∗ = a. The remaining axioms can be verified the same way.

K is thus a model of Q. Its "addition" ⊕ is also commutative. But there are other sentences true in N but false in K, and vice versa. For instance, a ≺ a, so K ⊨ ∃x x < x and K ⊭ ∀x ¬x < x. This shows that Q ⊬ ∀x ¬x < x.
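Because K's operations are given by simple case distinctions, it is easy to experiment with them mechanically. Here is a sketch (our own; the string "a" stands in for the non-standard element, and the function names are assumptions) that spot-checks axiom Q5 and the failure of ∀x ¬x < x:

```python
A = "a"  # the non-standard element of K

def succ(x):    return A if x == A else x + 1
def add(x, y):  return x + y if x != A and y != A else A
def less(x, y): return y == A or (x != A and x < y)

domain_sample = [0, 1, 2, 7, A]

# Q5: x + y' = (x + y)' holds throughout the sample ...
for x in domain_sample:
    for y in domain_sample:
        assert add(x, succ(y)) == succ(add(x, y))

# ... but irreflexivity of < fails at the non-standard element:
assert less(A, A)          # so K satisfies "exists x (x < x)"
assert not less(3, 3)
```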
Example 22.9. Consider the structure L with domain |L| = N ∪ {a, b} and interpretations ′^L = ∗ and +^L = ⊕ given by the tables

x | x∗          x ⊕ y | m      | a | b
n | n + 1       n     | n + m  | b | a
a | a           a     | a      | b | a
b | b           b     | b      | b | a

Since ∗ is injective, 0 is not in its range, and every x ∈ |L| other than 0 is, axioms Q1–Q3 are true in L. For any x, x ⊕ 0 = x, so Q4 is true as well. For Q5, consider x ⊕ y∗ and (x ⊕ y)∗. They are equal if x and y are both standard, since then ∗ and ⊕ agree with ′ and +. If x is non-standard and y is standard, we have x ⊕ y∗ = x = x∗ = (x ⊕ y)∗. If x and y are both non-standard, we have four cases:

a ⊕ a∗ = b = b∗ = (a ⊕ a)∗
b ⊕ b∗ = a = a∗ = (b ⊕ b)∗
b ⊕ a∗ = b = b∗ = (b ⊕ a)∗
a ⊕ b∗ = a = a∗ = (a ⊕ b)∗

If x is standard, but y is non-standard, we have

n ⊕ a∗ = n ⊕ a = b = b∗ = (n ⊕ a)∗
n ⊕ b∗ = n ⊕ b = a = a∗ = (n ⊕ b)∗

So, L ⊨ Q5. However, a ⊕ 0 ≠ 0 ⊕ a, so L ⊭ ∀x ∀y (x + y) = (y + x).
We've explicitly constructed models of Q in which the non-standard elements live "beyond" the standard elements. In fact, that much is required by the axioms. A non-standard element x cannot be ≺ 0. Otherwise, for some z, x ⊕ z∗ = 0 by Q8. But then 0 = x ⊕ z∗ = (x ⊕ z)∗ by Q5, contradicting Q2. Also, for every n, Q ⊢ ∀x (x < n′ → (x = 0 ∨ x = 1 ∨ · · · ∨ x = n)), so we can't have a ≺ n for any n > 0.

22.5 Computable Models of Arithmetic


The standard model N has two nice features. Its domain is the natural num-
bers N, i.e., its elements are just the kinds of things we want to talk about


using the language of arithmetic, and the standard numeral n actually picks out n. The other nice feature is that the interpretations of the non-logical symbols of L A are all computable. The successor, addition, and multiplication functions which serve as ′^N, +^N, and ×^N are computable functions of numbers. (Computable by Turing machines, or definable by primitive recursion, say.) And the less-than relation on N, i.e., <^N, is decidable.
Non-standard models of arithmetical theories such as Q and PA must con-
tain non-standard elements. Thus their domains typically include elements in
addition to N. However, any countable structure can be built on any denu-
merable set, including N. So there are also non-standard models with do-
main N. In such models M, of course, at least some numbers cannot play
the roles they usually play, since some k must be different from Val^M(n) for all n ∈ N.

Definition 22.10. A structure M for L A is computable iff |M| = N and ′^M, +^M, ×^M are computable functions and <^M is a decidable relation.

Example 22.11. Recall the structure K from Example 22.8. Its domain was |K| = N ∪ {a} and its interpretations were

0^K = 0
′^K(x) = x + 1 if x ∈ N, and ′^K(a) = a
+^K(x, y) = x + y if x, y ∈ N, and +^K(x, y) = a otherwise
×^K(x, y) = xy if x, y ∈ N, and ×^K(x, y) = a otherwise
<^K = {⟨x, y⟩ : x, y ∈ N and x < y} ∪ {⟨x, a⟩ : x ∈ |K|}

But |K| is denumerable and so is equinumerous with N. For instance, g : N → |K| with g(0) = a and g(n) = n − 1 for n > 0 is a bijection. We can turn it into an isomorphism between a new model K′ of Q and K. In K′, we have to assign different functions and relations to the symbols of L A, since different elements of N play the roles of standard and non-standard numbers.

Specifically, 0 now plays the role of a, not of the smallest standard number. The smallest standard number is now 1. So we assign 0^K′ = 1. The successor function is also different now: given a standard number, i.e., an n > 0, it still returns n + 1. But 0 now plays the role of a, which is its own successor. So ′^K′(0) = 0. For addition and multiplication we likewise have

+^K′(x, y) = x + y − 1 if x, y > 0, and 0 otherwise
×^K′(x, y) = (x − 1)(y − 1) + 1 if x, y > 0, and 0 otherwise

And we have ⟨x, y⟩ ∈ <^K′ iff x < y and x > 0 and y > 0, or if y = 0.

All of these functions are computable functions of natural numbers and <^K′ is a decidable relation on N, but they are not the same functions as successor, addition, and multiplication on N, and <^K′ is not the same relation as < on N.
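These interpretations are ordinary computable functions on N, so we can write them down directly. A sketch (our own, using the formulas as reconstructed above and a toy encoding of K; the names are assumptions), which spot-checks that g commutes with addition as an isomorphism must:

```python
# The computable non-standard model K' with domain N: 0 plays the
# role of the non-standard element a, and n > 0 plays the role of n - 1.
zero_Kp = 1                                  # interpretation of the constant 0
succ_Kp = lambda x: 0 if x == 0 else x + 1
add_Kp  = lambda x, y: x + y - 1 if x > 0 and y > 0 else 0
mul_Kp  = lambda x, y: (x - 1) * (y - 1) + 1 if x > 0 and y > 0 else 0
less_Kp = lambda x, y: y == 0 or (0 < x < y)

# g : K' -> K, with K's domain N together with "a" as before
g = lambda x: "a" if x == 0 else x - 1
add_K = lambda u, v: u + v if u != "a" and v != "a" else "a"

for x in [0, 1, 2, 5]:
    for y in [0, 1, 3]:
        assert g(add_Kp(x, y)) == add_K(g(x), g(y))
```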
This example shows that Q has computable non-standard models with do-
main N. However, the following result shows that this is not true for models
of PA (and thus also for models of TA).
Theorem 22.12 (Tennenbaum’s Theorem). N is the only computable model of PA.

Problems

Problem 22.1. Show that the converse of Proposition 22.2 is false, i.e., give an example of a structure M with |M| = {Val^M(n) : n ∈ N} that is not isomorphic to N.

Problem 22.2. Recall that Q contains the axioms

∀x ∀y (x′ = y′ → x = y)   (Q1)
∀x 0 ≠ x′   (Q2)
∀x (x ≠ 0 → ∃y x = y′)   (Q3)

Give structures M1, M2, M3 such that

1. M1 ⊨ Q1, M1 ⊨ Q2, M1 ⊭ Q3;

2. M2 ⊨ Q1, M2 ⊭ Q2, M2 ⊨ Q3; and

3. M3 ⊭ Q1, M3 ⊨ Q2, M3 ⊨ Q3.

Obviously, you just have to specify 0^Mi and ′^Mi for each.

Problem 22.3. Prove that K from Example 22.8 satisfies the remaining axioms of Q,

∀x (x × 0) = 0   (Q6)
∀x ∀y (x × y′) = ((x × y) + x)   (Q7)
∀x ∀y (x < y ↔ ∃z (x + z′ = y))   (Q8)

Find a sentence involving only ′ that is true in N but false in K.


Problem 22.4. Expand L of Example 22.9 to include ⊗ and ≺ interpreting × and <. Show that your structure satisfies the remaining axioms of Q,

∀x (x × 0) = 0   (Q6)
∀x ∀y (x × y′) = ((x × y) + x)   (Q7)
∀x ∀y (x < y ↔ ∃z (x + z′ = y))   (Q8)

Problem 22.5. In L of Example 22.9, a∗ = a and b∗ = b. Is there a model of Q in which a∗ = b and b∗ = a?

Problem 22.6. Give a structure L′ with |L′| = N isomorphic to L of Example 22.9.



Chapter 23

The Interpolation Theorem

23.1 Introduction
The interpolation theorem is the following result: Suppose ⊨ ϕ → ψ. Then there is a sentence χ such that ⊨ ϕ → χ and ⊨ χ → ψ. Moreover, every constant symbol, function symbol, and predicate symbol (other than =) in χ occurs both in ϕ and ψ. The sentence χ is called an interpolant of ϕ and ψ.
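For a simple illustration (our own example, not from the text): take ϕ to be ∀x (P(x) → Q(x)) ∧ P(c) and ψ to be ∃y Q(y) ∨ R(d). Then ⊨ ϕ → ψ, and χ = ∃y Q(y) is an interpolant: ⊨ ϕ → χ since Q(c) follows from ϕ, and ⊨ χ → ψ since χ is a disjunct of ψ. Note that Q is the only non-logical symbol in χ, and it is exactly the one occurring in both ϕ and ψ.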
The interpolation theorem is interesting in its own right, but its main importance lies in the fact that it can be used to prove results about definability in a theory, and about the conditions under which combining two consistent theories results in a consistent theory. The first result is known as the Beth definability theorem; the second, Robinson's joint consistency theorem.

23.2 Separation of Sentences


A bit of groundwork is needed before we can proceed with the proof of the interpolation theorem. An interpolant for ϕ and ψ is a sentence χ such that ϕ ⊨ χ and χ ⊨ ψ. By contraposition, the latter is true iff ¬ψ ⊨ ¬χ. A sentence χ with this property is said to separate ϕ and ¬ψ. So finding an interpolant for ϕ and ψ amounts to finding a sentence that separates ϕ and ¬ψ. As so often, it will be useful to consider a generalization: a sentence that separates two sets of sentences.

Definition 23.1. A sentence χ separates sets of sentences Γ and ∆ if and only if Γ ⊨ χ and ∆ ⊨ ¬χ. If no such sentence exists, then Γ and ∆ are inseparable.
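For instance (a toy example of our own), χ = ∀x P(x) separates Γ = {∀x P(x)} and ∆ = {∃x ¬P(x)}: clearly Γ ⊨ χ, and every model of ∆ satisfies ¬χ.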

The inclusion relations between the classes of models of Γ, ∆ and χ are


represented below:

Figure 23.1: χ separates Γ and ∆

Lemma 23.2. Suppose L′ is the language containing every constant symbol, function symbol and predicate symbol (other than =) that occurs in both Γ and ∆, and let L″ be obtained by the addition of infinitely many new constant symbols cn for n ≥ 0. Then if Γ and ∆ are inseparable in L′, they are also inseparable in L″.

Proof. We proceed indirectly: suppose by way of contradiction that Γ and ∆ are separated in L″. Then Γ ⊨ χ[c/x] and ∆ ⊨ ¬χ[c/x] for some χ ∈ L′ (where c is a new constant symbol; the case where χ contains more than one such new constant symbol is similar). By compactness, there are finite subsets Γ0 of Γ and ∆0 of ∆ such that Γ0 ⊨ χ[c/x] and ∆0 ⊨ ¬χ[c/x]. Let γ be the conjunction of all formulas in Γ0 and δ the conjunction of all formulas in ∆0. Then

γ ⊨ χ[c/x],   δ ⊨ ¬χ[c/x].

From the former, by Generalization, we have γ ⊨ ∀x χ, and from the latter by contraposition, χ[c/x] ⊨ ¬δ, whence also ∀x χ ⊨ ¬δ. Contraposition again gives δ ⊨ ¬∀x χ. By monotony,

Γ ⊨ ∀x χ,   ∆ ⊨ ¬∀x χ,

so that ∀x χ separates Γ and ∆ in L′.

Lemma 23.3. Suppose that Γ ∪ {∃x σ} and ∆ are inseparable, and c is a new constant symbol not in Γ, ∆, or σ. Then Γ ∪ {∃x σ, σ[c/x]} and ∆ are also inseparable.

Proof. Suppose for contradiction that χ separates Γ ∪ {∃x σ, σ[c/x]} and ∆, while at the same time Γ ∪ {∃x σ} and ∆ are inseparable. We distinguish two cases:

1. c does not occur in χ: in this case Γ ∪ {∃x σ, ¬χ} is satisfiable (otherwise χ separates Γ ∪ {∃x σ} and ∆). It remains so if σ[c/x] is added, so χ does not separate Γ ∪ {∃x σ, σ[c/x]} and ∆ after all.

2. c does occur in χ, so that χ has the form χ[c/x]. Then we have that

Γ ∪ {∃x σ, σ[c/x]} ⊨ χ[c/x],

whence Γ ∪ {∃x σ} ⊨ ∀x (σ → χ) by the Deduction Theorem and Generalization, and finally Γ ∪ {∃x σ} ⊨ ∃x χ. On the other hand, ∆ ⊨ ¬χ[c/x] and hence by Generalization ∆ ⊨ ¬∃x χ. So Γ ∪ {∃x σ} and ∆ are separable, a contradiction.

23.3 Craig’s Interpolation Theorem


Theorem 23.4 (Craig's Interpolation Theorem). If ⊨ ϕ → ψ, then there is a sentence χ such that ⊨ ϕ → χ and ⊨ χ → ψ, and every constant symbol, function symbol, and predicate symbol (other than =) in χ occurs both in ϕ and ψ. The sentence χ is called an interpolant of ϕ and ψ.

Proof. Suppose L1 is the language of ϕ and L2 is the language of ψ. Let L0 = L1 ∩ L2. For each i ∈ {0, 1, 2}, let Li′ be obtained from Li by adding the infinitely many new constant symbols c0, c1, c2, . . . .

If ϕ is unsatisfiable, ∃x x ≠ x is an interpolant. If ¬ψ is unsatisfiable (and hence ψ is valid), ∃x x = x is an interpolant. So we may assume also that both ϕ and ¬ψ are satisfiable.

In order to prove the contrapositive of the Interpolation Theorem, assume that there is no interpolant for ϕ and ψ. In other words, assume that {ϕ} and {¬ψ} are inseparable in L0.

Our goal is to extend the pair ({ϕ}, {¬ψ}) to a maximally inseparable pair (Γ*, ∆*). Let ϕ0, ϕ1, ϕ2, . . . enumerate the sentences of L1′, and ψ0, ψ1, ψ2, . . . enumerate the sentences of L2′. We define two increasing sequences of sets of sentences (Γn, ∆n), for n ≥ 0, as follows. Put Γ0 = {ϕ} and ∆0 = {¬ψ}. Assuming (Γn, ∆n) are already defined, define Γn+1 and ∆n+1 by:

1. If Γn ∪ {ϕn} and ∆n are inseparable in L0′, put ϕn in Γn+1. Moreover, if ϕn is an existential formula ∃x σ, then pick a new constant symbol c not occurring in Γn, ∆n, ϕn or ψn, and put σ[c/x] in Γn+1.

2. If Γn+1 and ∆n ∪ {ψn} are inseparable in L0′, put ψn in ∆n+1. Moreover, if ψn is an existential formula ∃x σ, then pick a new constant symbol c not occurring in Γn+1, ∆n, ϕn or ψn, and put σ[c/x] in ∆n+1.

Finally, define:

Γ* = ⋃_{n≥0} Γn,   ∆* = ⋃_{n≥0} ∆n.

By simultaneous induction on n we can now prove:

(1) Γn and ∆n are inseparable in L0′;

(2) Γn+1 and ∆n are inseparable in L0′.

The basis for (1) is given by Lemma 23.2. For the basis of (2), we need to distinguish three cases:

a. If Γ0 ∪ {ϕ0} and ∆0 are separable, then Γ1 = Γ0 and (2) is just (1);

b. If Γ1 = Γ0 ∪ {ϕ0}, then Γ1 and ∆0 are inseparable by construction.

c. It remains to consider the case where ϕ0 is existential, so that Γ1 = Γ0 ∪ {∃x σ, σ[c/x]}. By construction, Γ0 ∪ {∃x σ} and ∆0 are inseparable, so that by Lemma 23.3 also Γ0 ∪ {∃x σ, σ[c/x]} and ∆0 are inseparable.

This completes the basis of the induction for (1) and (2) above. Now for the inductive step. For (1), if ∆n+1 = ∆n ∪ {ψn} then Γn+1 and ∆n+1 are inseparable by construction (even when ψn is existential, by Lemma 23.3); if ∆n+1 = ∆n (because Γn+1 and ∆n ∪ {ψn} are separable), then we use the induction hypothesis on (2). For the inductive step for (2), if Γn+2 = Γn+1 ∪ {ϕn+1} then Γn+2 and ∆n+1 are inseparable by construction (even when ϕn+1 is existential, by Lemma 23.3); and if Γn+2 = Γn+1 then we use the inductive case for (1) just proved. This concludes the induction on (1) and (2).

It follows that Γ* and ∆* are inseparable; if not, by compactness, there is a sentence separating Γn and ∆n for some n ≥ 0, against (1). In particular, Γ* and ∆* are consistent: for if the former or the latter is inconsistent, then they are separated by ∃x x ≠ x or ∀x x = x, respectively.

We now show that Γ* is maximally consistent in L1′ and likewise ∆* in L2′. For the former, suppose that ϕn ∉ Γ* and ¬ϕn ∉ Γ*, for some n ≥ 0. If ϕn ∉ Γ* then Γn ∪ {ϕn} is separable from ∆n, and so there is χ ∈ L0′ such that both:

Γ* ⊨ ϕn → χ,   ∆* ⊨ ¬χ.

Likewise, if ¬ϕn ∉ Γ*, there is χ′ ∈ L0′ such that both:

Γ* ⊨ ¬ϕn → χ′,   ∆* ⊨ ¬χ′.

By propositional logic, Γ* ⊨ χ ∨ χ′ and ∆* ⊨ ¬(χ ∨ χ′), so χ ∨ χ′ separates Γ* and ∆*. A similar argument establishes that ∆* is maximal.

Finally, we show that Γ* ∩ ∆* is maximally consistent in L0′. It is obviously consistent, since it is the intersection of consistent sets. To show maximality, let σ ∈ L0′. Now, Γ* is maximal in L1′ ⊇ L0′, and similarly ∆* is maximal in L2′ ⊇ L0′. It follows that either σ ∈ Γ* or ¬σ ∈ Γ*, and either σ ∈ ∆* or ¬σ ∈ ∆*. If σ ∈ Γ* and ¬σ ∈ ∆* then σ would separate Γ* and ∆*; and if ¬σ ∈ Γ* and σ ∈ ∆* then Γ* and ∆* would be separated by ¬σ. Hence, either σ ∈ Γ* ∩ ∆* or ¬σ ∈ Γ* ∩ ∆*, and Γ* ∩ ∆* is maximal.

Since Γ* is maximally consistent, it has a model M1′ whose domain |M1′| comprises all and only the elements c^M1′ interpreting the constant symbols, just like in the proof of the completeness theorem. Similarly, ∆* has a model M2′ whose domain |M2′| is given by the interpretations c^M2′ of the constant symbols.

Let M1 be obtained from M1′ by dropping interpretations for constant symbols, function symbols, and predicate symbols in L1′ \ L0′, and similarly for M2. Then the map h : |M1| → |M2| defined by h(c^M1′) = c^M2′ is an isomorphism in L0′, because Γ* ∩ ∆* is maximally consistent in L0′, as shown. This follows because any L0′-sentence either belongs to both Γ* and ∆*, or to neither: so c^M1′ ∈ P^M1′ if and only if P(c) ∈ Γ* if and only if P(c) ∈ ∆* if and only if c^M2′ ∈ P^M2′. The other conditions satisfied by isomorphisms can be established similarly.

Let us now define a model M for the language L1 ∪ L2 as follows:

1. The domain |M| is just |M2′|, i.e., the set of all elements c^M2′;

2. If a predicate symbol P is in L2 \ L1 then P^M = P^M2′;

3. If a predicate symbol P is in L1 \ L2 then P^M = h(P^M1′), i.e., ⟨c1^M2′, . . . , cn^M2′⟩ ∈ P^M if and only if ⟨c1^M1′, . . . , cn^M1′⟩ ∈ P^M1′;

4. If a predicate symbol P is in L0 then P^M = P^M2′ = h(P^M1′);

5. Function symbols of L1 ∪ L2, including constant symbols, are handled similarly.

Finally, one shows by induction on formulas that M agrees with M1′ on all formulas of L1′ and with M2′ on all formulas of L2′. In particular, M ⊨ Γ* ∪ ∆*, whence M ⊨ ϕ and M ⊨ ¬ψ, and ⊭ ϕ → ψ. This concludes the proof of Craig's Interpolation Theorem.

23.4 The Definability Theorem


One important application of the interpolation theorem is Beth’s definability
theorem. To define an n-place relation R we can give a formula χ with n free
variables which does not involve R. This would be an explicit definition of R in
terms of χ. We can then say also that a theory Σ( P) in a language containing
the n-place predicate symbol P explicitly defines P if it contains (or at least
entails) a formalized explicit definition, i.e.,

Σ(P) ⊨ ∀x1 . . . ∀xn (P(x1, . . . , xn) ↔ χ(x1, . . . , xn)).

But an explicit definition is only one way of defining—in the sense of deter-
mining completely—a relation. A theory may also be such that the interpreta-
tion of P is fixed by the interpretation of the rest of the language in any model.
The definability theorem states that whenever a theory fixes the interpreta-
tion of P in this way—whenever it implicitly defines P—then it also explicitly
defines it.


Definition 23.5. Suppose L is a language not containing the predicate symbol P. A set Σ(P) of sentences of L ∪ {P} explicitly defines P if and only if there is a formula χ(x1, . . . , xn) of L such that

Σ(P) ⊨ ∀x1 . . . ∀xn (P(x1, . . . , xn) ↔ χ(x1, . . . , xn)).

Definition 23.6. Suppose L is a language not containing the predicate symbols P and P′. A set Σ(P) of sentences of L ∪ {P} implicitly defines P if and only if

Σ(P) ∪ Σ(P′) ⊨ ∀x1 . . . ∀xn (P(x1, . . . , xn) ↔ P′(x1, . . . , xn)),

where Σ(P′) is the result of uniformly replacing P with P′ in Σ(P).

In other words, for any model M and R, R′ ⊆ |M|^n, if both (M, R) ⊨ Σ(P) and (M, R′) ⊨ Σ(P′), then R = R′; where (M, R) is the structure M′ for the expansion of L to L ∪ {P} such that P^M′ = R, and similarly for (M, R′).
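Here is a small illustration (our own example): in the language L = {<}, let Σ(P) consist of the sentences ∀x ∀y (x < y ∨ x = y ∨ y < x), ∃x P(x), and ∀x ∀y (P(x) → ¬y < x). In any model of Σ(P) the order is linear and P must hold of exactly the least element, so Σ(P) implicitly defines P. And indeed Σ(P) explicitly defines P via χ(x) := ∀y ¬y < x, since Σ(P) ⊨ ∀x (P(x) ↔ ∀y ¬y < x).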

Theorem 23.7 (Beth Definability Theorem). A set Σ(P) of L ∪ {P}-sentences implicitly defines P if and only if Σ(P) explicitly defines P.

Proof. If Σ(P) explicitly defines P then both

Σ(P) ⊨ ∀x1 . . . ∀xn (P(x1, . . . , xn) ↔ χ(x1, . . . , xn)),
Σ(P′) ⊨ ∀x1 . . . ∀xn (P′(x1, . . . , xn) ↔ χ(x1, . . . , xn)),

and the conclusion follows. For the converse: assume that Σ(P) implicitly defines P. First, we add constant symbols c1, . . . , cn to L. Then

Σ(P) ∪ Σ(P′) ⊨ P(c1, . . . , cn) → P′(c1, . . . , cn).

By compactness, there are finite sets ∆0 ⊆ Σ(P) and ∆1 ⊆ Σ(P′) such that

∆0 ∪ ∆1 ⊨ P(c1, . . . , cn) → P′(c1, . . . , cn).

Let θ(P) be the conjunction of all sentences ϕ(P) such that either ϕ(P) ∈ ∆0 or ϕ(P′) ∈ ∆1, and let θ(P′) be the conjunction of all sentences ϕ(P′) such that either ϕ(P) ∈ ∆0 or ϕ(P′) ∈ ∆1. Then θ(P) ∧ θ(P′) ⊨ P(c1, . . . , cn) → P′(c1, . . . , cn). We can re-arrange this so that each predicate symbol occurs on one side of ⊨:

θ(P) ∧ P(c1, . . . , cn) ⊨ θ(P′) → P′(c1, . . . , cn).

By Craig's Interpolation Theorem there is a sentence χ(c1, . . . , cn) not containing P or P′ such that:

θ(P) ∧ P(c1, . . . , cn) ⊨ χ(c1, . . . , cn);   χ(c1, . . . , cn) ⊨ θ(P′) → P′(c1, . . . , cn).

From the former of these two entailments we have: θ(P) ⊨ P(c1, . . . , cn) → χ(c1, . . . , cn). And from the latter, since an L ∪ {P}-model (M, R) ⊨ ϕ(P) if and only if the corresponding L ∪ {P′}-model (M, R) ⊨ ϕ(P′), we have χ(c1, . . . , cn) ⊨ θ(P) → P(c1, . . . , cn), from which:

θ(P) ⊨ χ(c1, . . . , cn) → P(c1, . . . , cn).

Putting the two together, θ(P) ⊨ P(c1, . . . , cn) ↔ χ(c1, . . . , cn), and by monotony and generalization also

Σ(P) ⊨ ∀x1 . . . ∀xn (P(x1, . . . , xn) ↔ χ(x1, . . . , xn)).



Chapter 24

Lindström’s Theorem

24.1 Introduction
In this chapter we aim to prove Lindström's characterization of first-order logic as the maximal logic for which (given certain further constraints) the Compactness and the Downward Löwenheim-Skolem theorems hold. First, we need a more general characterization of the general class of logics to which the theorem applies. We will restrict ourselves to relational languages, i.e., languages which only contain predicate symbols and individual constants, but no function symbols.

24.2 Abstract Logics


Definition 24.1. An abstract logic is a pair ⟨L, |=L⟩, where L is a function that assigns to each language L a set L(L) of sentences, and |=L is a relation between structures for the language L and elements of L(L). In particular, ⟨F, |=⟩ is ordinary first-order logic, i.e., F is the function assigning to the language L the set of first-order sentences built from the constants in L, and |= is the satisfaction relation of first-order logic.

Notice that we are still employing the same notion of structure for a given language as for first-order logic, but we do not presuppose that sentences are built up from the basic symbols in L in the usual way, nor that the relation |=L is recursively defined in the same way as for first-order logic. So for instance the definition, being completely general, is intended to capture the case where sentences in ⟨L, |=L⟩ contain infinitely long conjunctions or disjunctions, or quantifiers other than ∃ and ∀ (e.g., "there are infinitely many x such that . . . "), or perhaps infinitely long quantifier prefixes. To emphasize that "sentences" in L(L) need not be ordinary sentences of first-order logic, in this chapter we use variables α, β, . . . to range over them, and reserve ϕ, ψ, . . . for ordinary first-order formulas.


Definition 24.2. Let ModL(α) denote the class {M : M |=L α}. If the language needs to be made explicit, we write Mod^L_L(α). Two structures M and N for L are elementarily equivalent in ⟨L, |=L⟩, written M ≡L N, if the same sentences from L(L) are true in each.

Definition 24.3. An abstract logic ⟨L, |=L⟩ for the language L is normal if it satisfies the following properties:

1. (L-Monotony) For languages L and L′, if L ⊆ L′, then L(L) ⊆ L(L′).

2. (Expansion Property) For each α ∈ L(L) there is a finite subset L′ of L such that the relation M |=L α depends only on the reduct of M to L′; i.e., if M and N have the same reduct to L′ then M |=L α if and only if N |=L α.

3. (Isomorphism Property) If M |=L α and M ≅ N then also N |=L α.

4. (Renaming Property) The relation |=L is preserved under renaming: if the language L′ is obtained from L by replacing each symbol P by a symbol P′ of the same arity and each constant c by a distinct constant c′, then for each structure M and sentence α, M |=L α if and only if M′ |=L α′, where M′ is the L′-structure corresponding to M and α′ ∈ L(L′) is the corresponding sentence.

5. (Boolean Property) The abstract logic ⟨L, |=L⟩ is closed under the Boolean connectives in the sense that for each α ∈ L(L) there is a β ∈ L(L) such that M |=L β if and only if M ⊭L α, and for each α and β there is a γ such that ModL(γ) = ModL(α) ∩ ModL(β). Similarly for atomic formulas and the other connectives.

6. (Quantifier Property) For each constant c in L and α ∈ L(L) there is a β ∈ L(L′) such that

Mod^L′_L(β) = {M : (M, a) ∈ Mod^L_L(α) for some a ∈ |M|},

where L′ = L \ {c} and (M, a) is the expansion of M to L assigning a to c.

7. (Relativization Property) Given a sentence α ∈ L(L) and symbols R, c1, . . . , cn not in L, there is a sentence β ∈ L(L ∪ {R, c1, . . . , cn}), called the relativization of α to R(x, c1, . . . , cn), such that for each structure M:

(M, X, b1, . . . , bn) |=L β if and only if N |=L α,

where N is the substructure of M with domain |N| = {a ∈ |M| : ⟨a, b1, . . . , bn⟩ ∈ X} (see Remark 1), and (M, X, b1, . . . , bn) is the expansion of M interpreting R, c1, . . . , cn by X, b1, . . . , bn, respectively (with X ⊆ |M|^{n+1}).


Definition 24.4. Given two abstract logics ⟨L1, |=L1⟩ and ⟨L2, |=L2⟩ we say that the latter is at least as expressive as the former, written ⟨L1, |=L1⟩ ≤ ⟨L2, |=L2⟩, if for each language L and sentence α ∈ L1(L) there is a sentence β ∈ L2(L) such that Mod^L_L1(α) = Mod^L_L2(β). The logics ⟨L1, |=L1⟩ and ⟨L2, |=L2⟩ are equivalent if ⟨L1, |=L1⟩ ≤ ⟨L2, |=L2⟩ and ⟨L2, |=L2⟩ ≤ ⟨L1, |=L1⟩.

Remark 5. First-order logic, i.e., the abstract logic ⟨F, |=⟩, is normal. In fact, the above properties are mostly straightforward for first-order logic. We just remark that the expansion property comes down to extensionality, and that the relativization of a sentence α to R(x, c1, . . . , cn) is obtained by replacing each subformula ∀x β by ∀x (R(x, c1, . . . , cn) → β). Moreover, if ⟨L, |=L⟩ is normal, then ⟨F, |=⟩ ≤ ⟨L, |=L⟩, as can be shown by induction on first-order formulas. Accordingly, with no loss in generality, we can assume that every first-order sentence belongs to every normal logic.

24.3 Compactness and Löwenheim-Skolem Properties


We now give the obvious extensions of compactness and Löwenheim-Skolem
to the case of abstract logics.

Definition 24.5. An abstract logic ⟨L, |=L⟩ has the Compactness Property if each set Γ of L(L)-sentences is satisfiable whenever each finite Γ0 ⊆ Γ is satisfiable.

Definition 24.6. ⟨L, |=L⟩ has the Downward Löwenheim-Skolem property if any satisfiable Γ has an enumerable model.

The notion of partial isomorphism from Definition 21.15 is purely "algebraic" (i.e., given without reference to the sentences of the language but only to the constants provided by the language L of the structures), and hence it applies to the case of abstract logics. In the case of first-order logic, we know from Theorem 21.17 that if two structures are partially isomorphic then they are elementarily equivalent. That proof does not carry over to abstract logics, for induction on formulas need not be available for arbitrary α ∈ L(L), but the theorem is true nonetheless, provided the Löwenheim-Skolem property holds.

Theorem 24.7. Suppose ⟨L, |=L⟩ is a normal logic with the Löwenheim-Skolem property. Then any two structures that are partially isomorphic are elementarily equivalent in ⟨L, |=L⟩.

Proof. Suppose M ≅p N, but for some α also M |=L α while N ⊭L α. By the Isomorphism Property we can assume that |M| and |N| are disjoint, and by the Expansion Property we can assume that α ∈ L(L) for a finite language L. Let I be a set of partial isomorphisms between M and N, and with no loss of generality also assume that if p ∈ I and q ⊆ p then also q ∈ I.

|M|^<ω is the set of finite sequences of elements of |M|. Let S be the ternary relation over |M|^<ω representing concatenation, i.e., if a, b, c ∈ |M|^<ω then S(a, b, c) holds if and only if c is the concatenation of a and b; and let T be the ternary relation such that T(a, b, c) holds for b ∈ |M| and a, c ∈ |M|^<ω if and only if a = a1, . . . , an and c = a1, . . . , an, b. Pick new 3-place predicate symbols P and Q and form the structure M∗ having the universe |M| ∪ |M|^<ω, having M as a substructure, and interpreting P and Q by the concatenation relations S and T (so M∗ is in the language L ∪ {P, Q}).

Define |N|^<ω, S′, T′, P′, Q′ and N∗ analogously. Since by hypothesis M ≅p N, there is a relation I between |M|^<ω and |N|^<ω such that I(a, b) holds if and only if a and b are isomorphic and satisfy the back-and-forth condition of Definition 21.15. Now, let M be the structure whose domain is the union of the domains of M∗ and N∗, having M∗ and N∗ as substructures, in the language with one extra binary predicate symbol R interpreted by the relation I and predicate symbols denoting the domains |M∗| and |N∗|.

Figure 24.1: The structure M with the internal partial isomorphism.


The crucial observation is that in the language of the structure M there is a
first-order sentence θ1 true in M saying that M |= L α and N 6|= L α (this requires
the Relativization Property), as well as a first-order sentence θ2 true in M say-
ing that M ' p N via the partial isomorphism I. By the Löwenheim-Skolem
Property, θ1 and θ2 are jointly true in an enumerable model M0 containing par-
tially isomorphic substructures M0 and N0 such that M0 |= L α and N0 6|= L α.
But enumerable partially isomorphic structures are in fact isomorphic by ??,
contradicting the Isomorphism Property of normal abstract logics.

24.4 Lindström’s Theorem


Lemma 24.8. Suppose α ∈ L(L), with L finite, and assume also that there is an n ∈ N such that for any two structures M and N, if M ≡n N and M |=L α then also N |=L α. Then α is equivalent to a first-order sentence, i.e., there is a first-order θ such that ModL(α) = ModL(θ).

Proof. Let n be such that any two n-equivalent structures M and N agree on
the value assigned to α. Recall ??: there are only finitely many first-order
sentences in a finite language that have quantifier rank no greater than n, up to


logical equivalence. Now, for each fixed structure M let θ_M be the conjunction of all first-order sentences β true in M with qr(β) ≤ n (this conjunction is finite), so that N ⊨ θ_M if and only if N ≡_n M. Then put θ = ⋁{θ_M : M ⊨_L α}; this disjunction is also finite (up to logical equivalence).

The conclusion Mod_L(α) = Mod_L(θ) follows. In fact, if N ⊨ θ then for some M ⊨_L α we have N ⊨ θ_M, whence N ≡_n M, and so also N ⊨_L α (by the hypothesis of the lemma). Conversely, if N ⊨_L α then θ_N is a disjunct in θ, and since N ⊨ θ_N, also N ⊨ θ.

Theorem 24.9 (Lindström's Theorem). Suppose ⟨L, ⊨_L⟩ has the Compactness and the Löwenheim-Skolem Properties. Then ⟨L, ⊨_L⟩ ≤ ⟨F, ⊨⟩ (so ⟨L, ⊨_L⟩ is equivalent to first-order logic).

Proof. By ??, it suffices to show that for any α ∈ L(L), with L finite, there is an n ∈ N such that for any two structures M and N: if M ≡_n N then M and N agree on α. For then α is equivalent to a first-order sentence, from which ⟨L, ⊨_L⟩ ≤ ⟨F, ⊨⟩ follows. Since we are working in a finite, purely relational language, by ?? we can replace the statement that M ≡_n N by the corresponding algebraic statement that I_n(∅, ∅).
Given α, suppose towards a contradiction that for each n there are structures M_n and N_n such that I_n(∅, ∅), but (say) M_n ⊨_L α whereas N_n ⊭_L α. By the Isomorphism Property we can assume that all the M_n's interpret the constants of the language by the same objects; furthermore, since there are only finitely many atomic sentences in the language, we may also assume that they satisfy the same atomic sentences (we can take a subsequence of the M_n's otherwise). Let M be the union of all the M_n's, i.e., the unique minimal structure having each M_n as a substructure. As in the proof of ??, let M∗ be the extension of M with domain |M| ∪ |M|^{<ω}, in the expanded language comprising the concatenation predicates P and Q.
Similarly, define Nn , N and N∗ . Now let M be the structure whose domain
comprises the domains of M∗ and N∗ as well as the natural numbers N along
with their natural ordering ≤, in the language with extra predicates represent-
ing the domains |M|, |N|, |M|<ω and |N|<ω as well as predicates coding the
domains of Mn and Nn in the sense that:

|Mn | = { a ∈ |M| : R( a, n)}; |Nn | = { a ∈ |N| : S( a, n)};


|M| <
n
ω
= { a ∈ |M| <ω
: R( a, n)}; |N| <
n
ω
= { a ∈ |N|<ω : S( a, n)}.

The structure M also has a ternary relation J such that J(n, a, b) holds if and only if I_n(a, b).
Now there is a sentence θ in the language L augmented by R, S, J, etc., saying that ≤ is a discrete linear ordering with first but no last element and such that M_n ⊨ α, N_n ⊭ α, and for each n in the ordering, J(n, a, b) holds if and only if I_n(a, b).


Using the Compactness Property, we can find a model of θ in which the ordering contains a non-standard element n∗. In particular, this model will contain substructures M_{n∗} and N_{n∗} such that M_{n∗} ⊨_L α and N_{n∗} ⊭_L α. But now we can define a set I of pairs of k-tuples from |M_{n∗}| and |N_{n∗}| by putting ⟨a, b⟩ ∈ I if and only if J(n∗ − k, a, b), where k is the length of a and b. Since n∗ is non-standard, for each standard k we have that n∗ − k > 0, and the set I witnesses the fact that M_{n∗} ≃_p N_{n∗}. But by ??, M_{n∗} is L-equivalent to N_{n∗}, a contradiction.



Part V

Computability


This part is based on Jeremy Avigad's notes on computability theory. So far, only the chapter on recursive functions contains exercises, and everything could stand to be expanded with motivation, examples, details, and exercises.



Chapter 25

Recursive Functions

These are Jeremy Avigad's notes on recursive functions, revised and expanded by Richard Zach. This chapter does contain some exercises, and can be included independently to provide the basis for a discussion of arithmetization of syntax.

25.1 Introduction
In order to develop a mathematical theory of computability, one has to first
of all develop a model of computability. We now think of computability as the
kind of thing that computers do, and computers work with symbols. But at
the beginning of the development of theories of computability, the paradig-
matic example of computation was numerical computation. Mathematicians
were always interested in number-theoretic functions, i.e., functions f : N^n → N that can be computed. So it is not surprising that at the beginning of the theory of computability, it was such functions that were studied. The most familiar examples of computable numerical functions, such as addition, multiplication, exponentiation (of natural numbers), share an interesting feature: they can be defined recursively. It is thus quite natural to attempt a general definition of computable function on the basis of recursive definitions. Among the many possible ways to define number-theoretic functions recursively, one particularly simple pattern of definition here becomes central: so-called primitive recursion.
In addition to computable functions, we might be interested in computable
sets and relations. A set is computable if we can compute the answer to
whether or not a given number is an element of the set, and a relation is com-
putable iff we can compute whether or not a tuple ⟨n_1, ..., n_k⟩ is an element
of the relation. By considering the characteristic function of a set or relation,
discussion of computable sets and relations can be subsumed under that of


computable functions. Thus we can define primitive recursive relations as


well, e.g., the relation “n evenly divides m” is a primitive recursive relation.
Primitive recursive functions—those that can be defined using just primi-
tive recursion—are not, however, the only computable number-theoretic func-
tions. Many generalizations of primitive recursion have been considered, but
the most powerful and widely-accepted additional way of computing func-
tions is by unbounded search. This leads to the definition of partial recur-
sive functions, and a related definition to general recursive functions. General
recursive functions are computable and total, and the definition character-
izes exactly the partial recursive functions that happen to be total. Recursive
functions can simulate every other model of computation (Turing machines,
lambda calculus, etc.) and so represent one of the many accepted models of
computation.

25.2 Primitive Recursion


Suppose we specify that a certain function l from N to N satisfies the follow-
ing two clauses:

l (0) = 1
l ( x + 1) = 2 · l ( x ).

It is pretty clear that there is only one function, l, that meets these two criteria.
This is an instance of a definition by primitive recursion. We can define even
more fundamental functions like addition and multiplication by

f ( x, 0) = x
f ( x, y + 1) = f ( x, y) + 1

and

g( x, 0)= 0
g( x, y + 1) = f ( g( x, y), x ).

Exponentiation can also be defined recursively, by

h( x, 0)= 1
h( x, y + 1) = g(h( x, y), x ).

We can also compose functions to build more complex ones; for example,

    k(x) = x^x + (x + 3) · x
         = f(h(x, x), g(f(x, 3), x)).
Let zero( x ) be the function that always returns 0, regardless of what x is,
and let succ( x ) = x + 1 be the successor function. The set of primitive recursive


functions is the set of functions from N^n to N that you get if you start with zero and succ and iterate the two operations above, primitive recursion and composition. The idea is that primitive recursive functions are defined in a straightforward and explicit way, so that it is intuitively clear that each one can be computed using finite means.

Definition 25.1. If f is a k-place function and g_0, ..., g_{k−1} are l-place functions on the natural numbers, the composition of f with g_0, ..., g_{k−1} is the l-place function h defined by

    h(x_0, ..., x_{l−1}) = f(g_0(x_0, ..., x_{l−1}), ..., g_{k−1}(x_0, ..., x_{l−1})).

Definition 25.2. If f is a k-place function and g is a (k + 2)-place function, then the function defined by primitive recursion from f and g is the (k + 1)-place function h defined by the equations

    h(0, z_0, ..., z_{k−1}) = f(z_0, ..., z_{k−1})
    h(x + 1, z_0, ..., z_{k−1}) = g(x, h(x, z_0, ..., z_{k−1}), z_0, ..., z_{k−1})

In addition to zero and succ, we will include among the primitive recursive functions the projection functions,

    P_i^n(x_0, ..., x_{n−1}) = x_i,

for each natural number n and i < n. These are not terribly exciting in themselves: P_i^n is simply the n-place function that always returns its ith argument. But they allow us to define new functions by disregarding arguments or switching arguments, as we'll see later.
In the end, we have the following:

Definition 25.3. The set of primitive recursive functions is the set of functions from N^n to N, defined inductively by the following clauses:

1. zero is primitive recursive.

2. succ is primitive recursive.

3. Each projection function P_i^n is primitive recursive.

4. If f is a k-place primitive recursive function and g_0, ..., g_{k−1} are l-place primitive recursive functions, then the composition of f with g_0, ..., g_{k−1} is primitive recursive.

5. If f is a k-place primitive recursive function and g is a (k + 2)-place primitive recursive function, then the function defined by primitive recursion from f and g is primitive recursive.


Put more concisely, the set of primitive recursive functions is the smallest
set containing zero, succ, and the projection functions P_i^n, and which is closed
under composition and primitive recursion.
Another way of describing the set of primitive recursive functions keeps
track of the "stage" at which a function enters the set. Let S_0 denote the set of starting functions: zero, succ, and the projections. Once S_i has been defined, let S_{i+1} be the set of all functions you get by applying a single instance of composition or primitive recursion to functions in S_i. Then

    S = ⋃_{i∈N} S_i

is the set of all primitive recursive functions.


Our definition of composition may seem too rigid, since g0 , . . . , gk−1 are
all required to have the same arity l. (Remember that the arity of a function
is the number of arguments; an l-place function has arity l.) But adding the
projection functions provides the desired flexibility. For example, suppose f
and g are 3-place functions and h is the 2-place function defined by

    h(x, y) = f(x, g(x, x, y), y).

The definition of h can be rewritten with the projection functions, as

    h(x, y) = f(P_0^2(x, y), g(P_0^2(x, y), P_0^2(x, y), P_1^2(x, y)), P_1^2(x, y)).

Then h is the composition of f with P_0^2, l, and P_1^2, where

    l(x, y) = g(P_0^2(x, y), P_0^2(x, y), P_1^2(x, y)),

i.e., l is the composition of g with P_0^2, P_0^2, and P_1^2.


For another example, let us again consider addition. This is described recursively by the following two equations:

    x + 0 = x
    x + (y + 1) = succ(x + y).

In other words, addition is the function add defined recursively by the equations

    add(0, x) = x
    add(y + 1, x) = succ(add(y, x)).

But even this is not a strict primitive recursive definition; we need to put it in the form

    add(0, x) = f(x)
    add(y + 1, x) = g(y, add(y, x), x)


for some 1-place primitive recursive function f and some 3-place primitive recursive function g. We can take f to be P_0^1, and we can define g using composition,

    g(y, w, x) = succ(P_1^3(y, w, x)).

The function g, being the composition of basic primitive recursive functions, is primitive recursive; and hence so is add. (Note that, strictly speaking, we have defined the function add(y, x) meeting the recursive specification of x + y; in other words, the variables are in a different order. Luckily, addition is commutative, so here the difference is not important; otherwise, we could define the function add′ by

    add′(x, y) = add(P_1^2(x, y), P_0^2(x, y)) = add(y, x),

using composition.)
One advantage to having the precise description of the primitive recursive functions is that we can be systematic in describing them. For example, we can assign a "notation" to each such function, as follows. Use symbols zero, succ, and P_i^n for zero, successor, and the projections. Now suppose f is defined by composition from a k-place function h and l-place functions g_0, ..., g_{k−1}, and we have assigned notations H, G_0, ..., G_{k−1} to the latter functions. Then, using a new symbol Comp_{k,l}, we can denote the function f by Comp_{k,l}[H, G_0, ..., G_{k−1}]. For the functions defined by primitive recursion, we can use analogous notations of the form Rec_k[G, H], where k denotes the arity of the function being defined. With this setup, we can denote the addition function by

    Rec_2[P_0^1, Comp_{1,3}[succ, P_1^3]].

Having these notations sometimes proves useful.
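As an aside, these notations are easy to manipulate on a computer. Here is a minimal Python sketch of ours (the tuple layout and tag names are our own choices, not part of the official definition) that encodes notations as nested tuples and evaluates them:

    # Notations as nested tuples (our own encoding, for illustration):
    #   ("zero",), ("succ",), ("proj", n, i),
    #   ("comp", H, G0, ..., Gk-1), ("rec", F, G)
    def evaluate(notation, args):
        """Apply the function denoted by `notation` to the tuple `args`."""
        tag = notation[0]
        if tag == "zero":
            return 0
        if tag == "succ":
            return args[0] + 1
        if tag == "proj":
            _, n, i = notation
            return args[i]
        if tag == "comp":
            h, gs = notation[1], notation[2:]
            return evaluate(h, tuple(evaluate(g, args) for g in gs))
        if tag == "rec":
            f, g = notation[1], notation[2]
            x, rest = args[0], args[1:]
            val = evaluate(f, rest)          # h(0, z) = f(z)
            for i in range(x):               # h(i+1, z) = g(i, h(i, z), z)
                val = evaluate(g, (i, val) + rest)
            return val
        raise ValueError(f"not a notation: {notation!r}")

    # Rec2[P^1_0, Comp1,3[succ, P^3_1]], i.e., addition:
    ADD = ("rec", ("proj", 1, 0), ("comp", ("succ",), ("proj", 3, 1)))
    assert evaluate(ADD, (3, 4)) == 7        # add(3, 4) = 4 + 3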

25.3 Primitive Recursive Functions are Computable


Suppose a function h is defined by primitive recursion

h(0, ~z) = f (~z)


h( x + 1, ~z) = g( x, h( x, ~z), ~z)

and suppose the functions f and g are computable. Then h(0, ~z) can obviously
be computed, since it is just f (~z) which we assume is computable. h(1, ~z) can
then also be computed, since 1 = 0 + 1 and so h(1, ~z) is just

g(0, h(0, ~z), ~z) = g(0, f (~z), ~z).


We can go on in this way and compute

h(2, ~z) = g(1, g(0, f (~z), ~z), ~z)


h(3, ~z) = g(2, g(1, g(0, f (~z), ~z), ~z), ~z)
h(4, ~z) = g(3, g(2, g(1, g(0, f (~z), ~z), ~z), ~z), ~z)
..
.

Thus, to compute h( x, ~z) in general, successively compute h(0, ~z), h(1, ~z), . . . ,
until we reach h( x, ~z).
Thus, primitive recursion yields a new computable function if the functions f and g are computable. Composition of functions also results in a computable function if the functions f and g_i are computable.
Since the basic functions zero, succ, and P_i^n are computable, and composition and primitive recursion yield computable functions from computable functions, this means that every primitive recursive function is computable.
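In programming terms, the computation just described is nothing more than a bounded loop. The following Python sketch (ours, not part of the text's official apparatus) packages primitive recursion as a combinator that produces a computable h from computable f and g:

    def primitive_recursion(f, g):
        """Return h with h(0, *z) = f(*z) and h(x+1, *z) = g(x, h(x, *z), *z)."""
        def h(x, *z):
            val = f(*z)              # h(0, z)
            for i in range(x):       # successively compute h(1, z), ..., h(x, z)
                val = g(i, val, *z)
            return val
        return h

    # Addition, following add(0, x) = x and add(y+1, x) = succ(add(y, x)):
    add = primitive_recursion(lambda x: x, lambda y, w, x: w + 1)
    assert add(3, 4) == 7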

25.4 Examples of Primitive Recursive Functions


Here are some examples of primitive recursive functions:

1. Constants: for each natural number n, the function that always returns n is a primitive recursive function, since it is equal to succ(succ(... succ(zero(x)))).

2. The identity function: id(x) = x, i.e., P_0^1

3. Addition, x + y

4. Multiplication, x · y

5. Exponentiation, x^y (with 0^0 defined to be 1)

6. Factorial, x! = 1 · 2 · 3 · · · · · x

7. The predecessor function, pred( x ), defined by

pred(0) = 0, pred( x + 1) = x

8. Truncated subtraction, x −̇ y, defined by

x −̇ 0 = x, x −̇ (y + 1) = pred( x −̇ y)

9. Maximum, max( x, y), defined by

max( x, y) = x + (y −̇ x )

10. Minimum, min( x, y)


11. Distance between x and y, | x − y|

In our definitions, we'll often use constants n. This is ok because the constant function const_n(x) is primitive recursive (defined from zero and succ). So if, e.g., we want to define the function f(x) = 2 · x, we can obtain it by composition from const_n(x) and multiplication as f(x) = const_2(x) · P_0^1(x). We'll make use of this trick from now on.
You'll also have noticed that the definition of pred does not, strictly speaking, fit into the pattern of definition by primitive recursion, since that pattern requires an extra argument. It is also odd in that it does not actually use pred(x) in the definition of pred(x + 1). But we can define pred′(x, y) by

    pred′(0, y) = zero(y) = 0
    pred′(x + 1, y) = P_0^3(x, pred′(x, y), y) = x

and then define pred from it by composition, e.g., as pred(x) = pred′(P_0^1(x), zero(x)).

The set of primitive recursive functions is further closed under the follow-
ing two operations:

1. Finite sums: if f(x, ~z) is primitive recursive, then so is the function

       g(y, ~z) = Σ_{x=0}^{y} f(x, ~z).

2. Finite products: if f(x, ~z) is primitive recursive, then so is the function

       h(y, ~z) = Π_{x=0}^{y} f(x, ~z).

For example, finite sums are defined recursively by the equations

g(0, ~z) = f (0, ~z), g(y + 1, ~z) = g(y, ~z) + f (y + 1, ~z).

We can also define boolean operations, where 1 stands for true, and 0 for false:

1. Negation, not( x ) = 1 −̇ x

2. Conjunction, and( x, y) = x · y

Other classical boolean operations like or( x, y) and ifthen( x, y) can be defined
from these in the usual way.
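For instance, in a Python sketch (our illustration), these are just arithmetic on 0 and 1, with or obtained from not and and in the usual De Morgan way:

    def not_(x):                 # not(x) = 1 -. x
        return max(1 - x, 0)

    def and_(x, y):              # and(x, y) = x * y
        return x * y

    def or_(x, y):               # or(x, y) = not(and(not(x), not(y)))
        return not_(and_(not_(x), not_(y)))

    assert (not_(0), and_(1, 1), or_(0, 1), or_(0, 0)) == (1, 1, 1, 0)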


25.5 Primitive Recursive Relations


Definition 25.4. A relation R(~x) is said to be primitive recursive if its characteristic function,

    χ_R(~x) = { 1 if R(~x); 0 otherwise }

is primitive recursive.

In other words, when one speaks of a primitive recursive relation R(~x), one is referring to a relation of the form χ_R(~x) = 1, where χ_R is a primitive recursive function which, on any input, returns either 1 or 0. For example, the relation IsZero(x), which holds if and only if x = 0, corresponds to the function χ_IsZero, defined using primitive recursion by

    χ_IsZero(0) = 1,    χ_IsZero(x + 1) = 0.

It should be clear that one can compose relations with other primitive re-
cursive functions. So the following are also primitive recursive:

1. The equality relation, x = y, defined by IsZero(|x − y|)

2. The less-than-or-equal relation, x ≤ y, defined by IsZero(x −̇ y)

Furthermore, the set of primitive recursive relations is closed under boolean operations:

1. Negation, ¬ P

2. Conjunction, P ∧ Q

3. Disjunction, P ∨ Q

4. If . . . then, P → Q

are all primitive recursive, if P and Q are. For suppose χ_P(~z) and χ_Q(~z) are primitive recursive. Then the relation R(~z) that holds iff both P(~z) and Q(~z) hold has the characteristic function χ_R(~z) = and(χ_P(~z), χ_Q(~z)).
One can also define relations using bounded quantification:

1. Bounded universal quantification: if R(x, ~z) is a primitive recursive relation, then so is the relation

(∀ x < y) R( x, ~z)

which holds if and only if R( x, ~z) holds for every x less than y.

2. Bounded existential quantification: if R(x, ~z) is a primitive recursive relation, then so is
(∃ x < y) R( x, ~z).


By convention, we take (∀x < 0) R(x, ~z) to be true (for the trivial reason that there are no x less than 0) and (∃x < 0) R(x, ~z) to be false. A universal
quantifier functions just like a finite product; it can also be defined directly by

g(0, ~z) = 1, g(y + 1, ~z) = and( g(y, ~z), χ R (y, ~z)).

Bounded existential quantification can similarly be defined using or. Alternatively, it can be defined from bounded universal quantification, using the
equivalence, (∃ x < y) ϕ( x ) ↔ ¬(∀ x < y) ¬ ϕ( x ). Note that, for exam-
ple, a bounded quantifier of the form (∃ x ≤ y) . . . x . . . is equivalent to
(∃ x < y + 1) . . . x . . . .
Another useful primitive recursive function is:

1. The conditional function, cond(x, y, z), defined by

       cond(x, y, z) = { y if x = 0; z otherwise }

   This is defined recursively by

       cond(0, y, z) = y,    cond(x + 1, y, z) = z.

One can use this to justify:

1. Definition by cases: if g_0(~x), ..., g_m(~x) are functions, and R_0(~x), ..., R_{m−1}(~x) are relations, then the function f defined by

       f(~x) = { g_0(~x) if R_0(~x);
                 g_1(~x) if R_1(~x) and not R_0(~x);
                 ...
                 g_{m−1}(~x) if R_{m−1}(~x) and none of the previous hold;
                 g_m(~x) otherwise }

is also primitive recursive.

When m = 1, this is just the function defined by

    f(~x) = cond(χ_{¬R_0}(~x), g_0(~x), g_1(~x)).

For m greater than 1, one can just compose definitions of this form.

25.6 Bounded Minimization


It is often useful to define a function as the least number satisfying some prop-
erty or relation P. If P is decidable, we can compute this function simply by
trying out all the possible numbers, 0, 1, 2, . . . , until we find the least one satis-
fying P. This kind of unbounded search takes us out of the realm of primitive


recursive functions. However, if we're only interested in the least number less than some independently given bound, we stay primitive recursive. In other words, and a bit more generally, suppose we have a primitive recursive relation R(x, z). Consider the function that maps y and z to the least x < y such that R(x, z). It, too, can be computed, by testing whether R(0, z), R(1, z), ..., R(y − 1, z). But why is it primitive recursive?

Proposition 25.5. If R(x, ~z) is primitive recursive, so is the function m_R(y, ~z) which returns the least x less than y such that R(x, ~z) holds, if there is one, and 0 otherwise. We will write the function m_R as

    (min x < y) R(x, ~z).

Proof. Note that there can be no x < 0 such that R(x, ~z) since there is no x < 0 at all. So m_R(0, ~z) = 0.
In case the bound is y + 1 we have three cases: (a) There is an x < y such that R(x, ~z), in which case m_R(y + 1, ~z) = m_R(y, ~z). (b) There is no such x but R(y, ~z) holds, then m_R(y + 1, ~z) = y. (c) There is no x < y + 1 such that R(x, ~z), then m_R(y + 1, ~z) = 0. So,

    m_R(0, ~z) = 0
    m_R(y + 1, ~z) = { m_R(y, ~z) if (∃x < y) R(x, ~z);
                       y otherwise, provided R(y, ~z);
                       0 otherwise }

The choice of "0 otherwise" is somewhat arbitrary. It is in fact even easier to recursively define the function m′_R which returns the least x less than y such that R(x, ~z) holds, and y + 1 otherwise. When we use min, however, we will always know that the least x such that R(x, ~z) exists and is less than y. Thus, in practice, we will not have to worry about the possibility that the value (min x < y) R(x, ~z) = 0 is ambiguous between R(0, ~z) holding and there being no x < y at all such that R(x, ~z). As with bounded quantification, (min x ≤ y) ... can be understood as (min x < y + 1) ....
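A quick Python sketch of ours, mirroring the recursive clauses in the proof, may help:

    def bounded_min(R, y, *z):
        """(min x < y) R(x, *z): least x < y with R(x, *z), and 0 if there is none."""
        val, found = 0, False
        for x in range(y):            # unfold the recursion on the bound
            if not found and R(x, *z):
                val, found = x, True
        return val

    # The least x < 10 with x * x > 20 is 5:
    assert bounded_min(lambda x: x * x > 20, 10) == 5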

25.7 Primes
Bounded quantification and bounded minimization provide us with a good deal of machinery to show that natural functions and relations are primitive recursive. For example, consider the relation "x divides y", written x | y. x | y holds if division of y by x is possible without remainder, i.e., if y is an integer multiple of x. (If it doesn't hold, i.e., the remainder when dividing y by x is > 0, we write x ∤ y.) In other words, x | y iff for some z, x · z = y.


Obviously, any such z, if it exists, must be ≤ y. So, we have that x | y iff for
some z ≤ y, x · z = y. We can define the relation x | y by bounded existential
quantification from = and multiplication by
x | y ⇔ (∃z ≤ y) ( x · z) = y.
We’ve thus shown that x | y is primitive recursive.
A natural number x is prime if it is neither 0 nor 1 and is only divisible by 1 and itself. In other words, prime numbers are such that, whenever y | x, either y = 1 or y = x. To test if x is prime, we only have to check if y | x for all y ≤ x, since if y > x, then automatically y ∤ x. So, the relation Prime(x), which holds iff x is prime, can be defined by

    Prime(x) ⇔ x ≥ 2 ∧ (∀y ≤ x) (y | x → y = 1 ∨ y = x)

and is thus primitive recursive.
The primes are 2, 3, 5, 7, 11, etc. Consider the function p(x) which returns the xth prime in that sequence, i.e., p(0) = 2, p(1) = 3, p(2) = 5, etc. (For convenience we will often write p(x) as p_x; p_0 = 2, p_1 = 3, etc.)
If we had a function nextPrime(x), which returns the first prime number
larger than x, p can be easily defined using primitive recursion:
p (0) = 2
p( x + 1) = nextPrime( p( x ))
Since nextPrime( x ) is the least y such that y > x and y is prime, it can be
easily computed by unbounded search. But it can also be defined by bounded
minimization, thanks to a result due to Euclid: there is always a prime number
between x and x! + 1.
nextPrime(x) = (min y ≤ x! + 1) (y > x ∧ Prime(y)).
This shows that nextPrime(x), and hence p(x), are (not just computable but) primitive recursive.
(If you're curious, here's a quick proof of Euclid's theorem. Suppose p_n is the largest prime ≤ x and consider the product p = p_0 · p_1 · ... · p_n of all primes ≤ x. Either p + 1 is prime or there is a prime between x and p + 1. Why? Suppose p + 1 is not prime. Then some prime number q | p + 1 where q < p + 1. None of the primes ≤ x divide p + 1. (By definition of p, each of the primes p_i ≤ x divides p, i.e., with remainder 0. So, each of the primes p_i ≤ x divides p + 1 with remainder 1, and so p_i ∤ p + 1.) Hence, q is a prime > x and < p + 1. And p ≤ x!, so there is a prime > x and ≤ x! + 1.)
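Here is a naive Python transcription of ours of these definitions; efficiency is beside the point, the point is that every search is bounded:

    from math import factorial

    def divides(x, y):
        # x | y iff for some z <= y, x * z = y
        return any(x * z == y for z in range(y + 1))

    def is_prime(x):
        # Prime(x) iff x >= 2 and every y <= x dividing x is 1 or x
        return x >= 2 and all(y in (1, x) for y in range(1, x + 1) if divides(y, x))

    def next_prime(x):
        # Bounded minimization: least y <= x! + 1 with y > x and Prime(y);
        # Euclid's theorem guarantees the bound suffices.
        return next(y for y in range(x + 1, factorial(x) + 2) if is_prime(y))

    def p(x):
        val = 2                       # p(0) = 2
        for _ in range(x):            # p(x+1) = nextPrime(p(x))
            val = next_prime(val)
        return val

    assert [p(i) for i in range(5)] == [2, 3, 5, 7, 11]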

25.8 Sequences
The set of primitive recursive functions is remarkably robust. But we will be
able to do even more once we have developed an adequate means of handling


sequences. We will identify finite sequences of natural numbers with natural numbers in the following way: the sequence ⟨a_0, a_1, a_2, ..., a_k⟩ corresponds to the number

    p_0^{a_0+1} · p_1^{a_1+1} · p_2^{a_2+1} · ... · p_k^{a_k+1}.
We add one to the exponents to guarantee that, for example, the sequences
⟨2, 7, 3⟩ and ⟨2, 7, 3, 0, 0⟩ have distinct numeric codes. We can take both 0 and 1
to code the empty sequence; for concreteness, let Λ denote 0.
Let us define the following functions:

1. len(s), which returns the length of the sequence s: Let R(i, s) be the relation defined by

       R(i, s) iff p_i | s ∧ (∀j < s) (j > i → p_j ∤ s)

   R is primitive recursive. Now let

       len(s) = { 0 if s = 0 or s = 1;
                  1 + (min i < s) R(i, s) otherwise }
1 + (min i < s) R(i, s) otherwise

   Note that we need to bound the search on i; clearly s provides an acceptable bound.

2. append(s, a), which returns the result of appending a to the sequence s:

       append(s, a) = { 2^{a+1} if s = 0 or s = 1;
                        s · p_{len(s)}^{a+1} otherwise }

3. element(s, i), which returns the ith element of s (where the initial element is called the 0th), or 0 if i is greater than or equal to the length of s:

       element(s, i) = { 0 if i ≥ len(s);
                         (min j < s) (p_i^{j+1} ∤ s) − 1 otherwise }
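The following Python sketch of ours implements this coding directly; the helper prime(i) for p_i is our own addition, not one of the official functions:

    def prime(i):
        """p_i, the ith prime (p_0 = 2), by naive search."""
        ps, n = [], 2
        while len(ps) <= i:
            if all(n % q for q in ps):
                ps.append(n)
            n += 1
        return ps[i]

    def encode(seq):
        """<a_0, ..., a_k> |-> p_0^(a_0+1) * ... * p_k^(a_k+1); <> |-> 0."""
        code = 1
        for i, a in enumerate(seq):
            code *= prime(i) ** (a + 1)
        return 0 if not seq else code

    def length(s):
        i = 0
        while s not in (0, 1) and s % prime(i) == 0:
            i += 1
        return i

    def element(s, i):
        if i >= length(s):
            return 0
        e = 0
        while s % prime(i) ** (e + 1) == 0:   # exponent of p_i in s is a_i + 1
            e += 1
        return e - 1

    s = encode([2, 7, 3])
    assert length(s) == 3
    assert [element(s, i) for i in range(3)] == [2, 7, 3]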

Instead of using the official names for the functions defined above, we introduce a more compact notation. We will use (s)_i instead of element(s, i), and ⟨s_0, ..., s_k⟩ to abbreviate

    append(append(... append(Λ, s_0) ...), s_k).

Note that if s has length k, the elements of s are (s)_0, ..., (s)_{k−1}.


It will be useful for us to be able to bound the numeric code of a sequence
in terms of its length and its largest element. Suppose s is a sequence of length
k, each element of which is less than equal to some number x. Then s has at


most k prime factors, each at most p_{k−1}, and each raised to at most x + 1 in the prime factorization of s. In other words, if we define

    sequenceBound(x, k) = p_{k−1}^{k·(x+1)},

then the numeric code of the sequence s described above is at most sequenceBound(x, k).
Having such a bound on sequences gives us a way of defining new functions using bounded search. For example, suppose we want to define the function concat(s, t), which concatenates two sequences. A first option is to define a "helper" function hconcat(s, t, n) which concatenates the first n symbols of t to s. This function can be defined by primitive recursion, as follows:

hconcat(s, t, 0) = s
    hconcat(s, t, n + 1) = append(hconcat(s, t, n), (t)_n)

Then we can define concat by

concat(s, t) = hconcat(s, t, len(t)).

But using bounded search, we can be lazy. All we need to do is write down a
primitive recursive specification of the object (number) we are looking for, and
a bound on how far to look. The following works:

    concat(s, t) = (min v < sequenceBound(s + t, len(s) + len(t)))
                   (len(v) = len(s) + len(t) ∧
                    (∀i < len(s)) ((v)_i = (s)_i) ∧
                    (∀j < len(t)) ((v)_{len(s)+j} = (t)_j))

We will write s ⌢ t instead of concat(s, t).

25.9 Other Recursions


Using pairing and sequencing, we can justify more exotic (and useful) forms
of primitive recursion. For example, it is often useful to define two functions
simultaneously, such as in the following definition:

    f_0(0, ~z) = k_0(~z)
    f_1(0, ~z) = k_1(~z)
    f_0(x + 1, ~z) = h_0(x, f_0(x, ~z), f_1(x, ~z), ~z)
    f_1(x + 1, ~z) = h_1(x, f_0(x, ~z), f_1(x, ~z), ~z)

This is an instance of simultaneous recursion. Another useful way of defining functions is to give the value of f(x + 1, ~z) in terms of all the values f(0, ~z),


..., f(x, ~z), as in the following definition:

    f(0, ~z) = g(~z)
    f(x + 1, ~z) = h(x, ⟨f(0, ~z), ..., f(x, ~z)⟩, ~z).

The following schema captures this idea more succinctly:

    f(x, ~z) = h(x, ⟨f(0, ~z), ..., f(x − 1, ~z)⟩)

with the understanding that the second argument to h is just the empty sequence when x is 0. In either formulation, the idea is that in computing the "successor step," the function f can make use of the entire sequence of values computed so far. This is known as a course-of-values recursion. For a particular example, it can be used to justify the following type of definition:

    f(x, ~z) = { h(x, f(k(x, ~z), ~z), ~z) if k(x, ~z) < x;
                 g(x, ~z) otherwise }
In other words, the value of f at x can be computed in terms of the value of f
at any previous value, given by k.
You should think about how to obtain these functions using ordinary prim-
itive recursion. One final version of primitive recursion is more flexible in that
one is allowed to change the parameters (side values) along the way:
f (0, ~z) = g(~z)
f ( x + 1, ~z) = h( x, f ( x, k(~z)), ~z)
This, too, can be simulated with ordinary primitive recursion. (Doing so is
tricky. For a hint, try unwinding the computation by hand.)
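To see why course-of-values recursion adds no real power, here is a Python sketch of ours that simulates it with a growing list standing in for the coded sequence ⟨f(0, ~z), ..., f(x − 1, ~z)⟩; Fibonacci is the classic example:

    def course_of_values(h):
        """Return f with f(x) = h(x, [f(0), ..., f(x-1)])."""
        def f(x):
            prev = []                  # plays the role of <f(0), ..., f(x-1)>
            for i in range(x + 1):
                prev.append(h(i, prev))
            return prev[x]
        return f

    # fib(x) = 1 for x < 2, and fib(x) = fib(x-1) + fib(x-2) otherwise:
    fib = course_of_values(lambda x, prev: 1 if x < 2 else prev[x - 1] + prev[x - 2])
    assert [fib(i) for i in range(7)] == [1, 1, 2, 3, 5, 8, 13]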
Finally, notice that we can always extend our “universe” by defining addi-
tional objects in terms of the natural numbers, and defining primitive recur-
sive functions that operate on them. For example, we can take an integer to
be given by a pair hm, ni of natural numbers, which, intuitively, represents the
integer m − n. In other words, we say
    Integer(x) ⇔ len(x) = 2
and then we define the following:
1. iequal( x, y)
2. iplus( x, y)
3. iminus( x, y)
4. itimes( x, y)
Similarly, we can define a rational number to be a pair h x, yi of integers with
y 6= 0, representing the value x/y. And we can define qequal, qplus, qminus,
qtimes, qdivides, and so on.


25.10 Non-Primitive Recursive Functions


The primitive recursive functions do not exhaust the intuitively computable
functions. It should be intuitively clear that we can make a list of all the unary primitive recursive functions, f_0, f_1, f_2, ..., such that we can effectively compute the value of f_x on input y; in other words, the function g(x, y), defined by

    g(x, y) = f_x(y)

is computable. But then so is the function

    h(x) = g(x, x) + 1
         = f_x(x) + 1.

For each primitive recursive function f_i, the value of h and f_i differ at i. So h is computable, but not primitive recursive; and one can say the same about g. This is an "effective" version of Cantor's diagonalization argument.
One can provide more explicit examples of computable functions that are not primitive recursive. For example, let the notation g^n(x) denote g(g(... g(x))), with n g's in all; and define a sequence g_0, g_1, ... of functions by

    g_0(x) = x + 1
    g_{n+1}(x) = g_n^x(x)

You can confirm that each function g_n is primitive recursive. Each successive function grows much faster than the one before; g_1(x) is equal to 2x, g_2(x) is equal to 2^x · x, and g_3(x) grows roughly like an exponential stack of x 2's. Ackermann's function is essentially the function G(x) = g_x(x), and one can show that this grows faster than any primitive recursive function.
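A direct Python transcription of ours makes the growth palpable:

    def g(n, x):
        """g_0(x) = x + 1; g_{n+1}(x) = g_n applied x times to x."""
        if n == 0:
            return x + 1
        val = x
        for _ in range(x):
            val = g(n - 1, val)
        return val

    assert g(1, 5) == 10            # g_1(x) = 2x
    assert g(2, 5) == 2 ** 5 * 5    # g_2(x) = 2^x * x
    # Do not try g(3, 5): the result has far too many digits to compute this way.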
Let us return to the issue of enumerating the primitive recursive functions.
Remember that we have assigned symbolic notations to each primitive recur-
sive function; so it suffices to enumerate notations. We can assign a natural
number #( F ) to each notation F, recursively, as follows:

#(0) = h0i
#( S ) = h1i
#( Pin ) = h2, n, i i
#(Compk,l [ H, G0 , . . . , Gk−1 ]) = h3, k, l, #( H ), #( G0 ), . . . , #( Gk−1 )i
#(Recl [ G, H ]) = h4, l, #( G ), #( H )i

Here I am using the fact that every sequence of numbers can be viewed as a natural number, using the codes from the last section. The upshot is that every notation is assigned a natural number. Of course, some sequences (and hence some numbers) do not correspond to notations; but we can let f_i be the unary primitive recursive function with notation coded as i, if i codes such a notation, and the constant 0 function otherwise.

notation; and the constant 0 function otherwise. The net result is that we have
an explicit way of enumerating the unary primitive recursive functions.
(In fact, some functions, like the constant zero function, will appear more
than once on the list. This is not just an artifact of our coding, but also a result
of the fact that the constant zero function has more than one notation. We will
later see that one can not computably avoid these repetitions; for example,
there is no computable function that decides whether or not a given notation
represents the constant zero function.)
We can now take the function g(x, y) to be given by f_x(y), where f_x refers to the enumeration we have just described. How do we know that g(x, y) is computable? Intuitively, this is clear: to compute g(x, y), first "unpack" x, and see if it is a notation for a unary function; if it is, compute the value of that function on input y.
You may already be convinced that (with some work!) one can write
a program (say, in Java or C++) that does this; and now we can appeal to
the Church-Turing thesis, which says that anything that, intuitively, is com-
putable can be computed by a Turing machine.
Of course, a more direct way to show that g( x, y) is computable is to de-
scribe a Turing machine that computes it, explicitly. This would, in partic-
ular, avoid the Church-Turing thesis and appeals to intuition. But, as noted
above, working with Turing machines directly is unpleasant. Soon we will
have built up enough machinery to show that g( x, y) is computable, appeal-
ing to a model of computation that can be simulated on a Turing machine:
namely, the recursive functions.

25.11 Partial Recursive Functions


To motivate the definition of the recursive functions, note that our proof that there are computable functions that are not primitive recursive actually establishes much more. The argument was simple: all we used was the fact that it is possible to enumerate functions f_0, f_1, ... such that, as a function of x and y, f_x(y) is computable. So the argument applies to any class of functions that can be enumerated in such a way. This puts us in a bind: we would like to describe the computable functions explicitly; but any explicit description of a collection of computable functions cannot be exhaustive!
The way out is to allow partial functions to come into play. We will see
that it is possible to enumerate the partial computable functions. In fact, we
already pretty much know that this is the case, since it is possible to enumerate
Turing machines in a systematic way. We will come back to our diagonal
argument later, and explore why it does not go through when partial functions
are included.
The question is now this: what do we need to add to the primitive recur-
sive functions to obtain all the partial recursive functions? We need to do two


things:

1. Modify our definition of the primitive recursive functions to allow for partial functions as well.

2. Add something to the definition, so that some new partial functions are
included.

The first is easy. As before, we will start with zero, successor, and projec-
tions, and close under composition and primitive recursion. The only differ-
ence is that we have to modify the definitions of composition and primitive
recursion to allow for the possibility that some of the terms in the definition
are not defined. If f and g are partial functions, we will write f ( x ) ↓ to mean
that f is defined at x, i.e., x is in the domain of f ; and f ( x ) ↑ to mean the
opposite, i.e., that f is not defined at x. We will use f(x) ≃ g(x) to mean that either f(x) and g(x) are both undefined, or they are both defined and equal.
We will use these notations for more complicated terms as well. We will adopt
the convention that if h and g0 , . . . , gk all are partial functions, then

h( g0 (~x ), . . . , gk (~x ))

is defined if and only if each gi is defined at ~x, and h is defined at g0 (~x ),


. . . , gk (~x ). With this understanding, the definitions of composition and prim-
itive recursion for partial functions is just as above, except that we have to
replace “=” by “'”.
What we will add to the definition of the primitive recursive functions to
obtain partial functions is the unbounded search operator. If f ( x, ~z) is any partial
function on the natural numbers, define µx f ( x, ~z) to be

the least x such that f (0, ~z), f (1, ~z), . . . , f ( x, ~z) are all defined, and
f ( x, ~z) = 0, if such an x exists

with the understanding that µx f(x, ~z) is undefined otherwise. This defines µx f(x, ~z) uniquely.
Note that our definition makes no reference to Turing machines, or algo-
rithms, or any specific computational model. But like composition and prim-
itive recursion, there is an operational, computational intuition behind un-
bounded search. When it comes to the computability of a partial function,
arguments where the function is undefined correspond to inputs for which
the computation does not halt. The procedure for computing µx f ( x, ~z) will
amount to this: compute f (0, ~z), f (1, ~z), f (2, ~z) until a value of 0 is returned. If
any of the intermediate computations do not halt, however, neither does the
computation of µx f ( x, ~z).
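In a Python sketch of ours, with an undefined value modeled by a call that never returns, the µ operator is just an unbounded while loop:

    def mu(f, *z):
        """Least x such that f(0, z), ..., f(x, z) are defined and f(x, z) = 0."""
        x = 0
        while f(x, *z) != 0:    # if f never returns 0 (or hangs), neither do we
            x += 1
        return x

    # The least root of (x - 4)(x - 5) = x^2 - 9x + 20:
    assert mu(lambda x: x * x - 9 * x + 20) == 4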
If R( x, ~z) is any relation, µx R( x, ~z) is defined to be µx (1 −̇ χ R ( x, ~z)). In
other words, µx R( x, ~z) returns the least value of x such that R( x, ~z) holds. So,
if f ( x, ~z) is a total function, µx f ( x, ~z) is the same as µx ( f ( x, ~z) = 0). But note


that our original definition is more general, since it allows for the possibility
that f ( x, ~z) is not everywhere defined (whereas, in contrast, the characteristic
function of a relation is always total).

Definition 25.6. The set of partial recursive functions is the smallest set of partial
functions from the natural numbers to the natural numbers (of various arities)
containing zero, successor, and projections, and closed under composition,
primitive recursion, and unbounded search.

Of course, some of the partial recursive functions will happen to be total, i.e., defined for every argument.

Definition 25.7. The set of recursive functions is the set of partial recursive
functions that are total.

A recursive function is sometimes called "total recursive" to emphasize that it is defined everywhere.

25.12 The Normal Form Theorem


Theorem 25.8 (Kleene’s Normal Form Theorem). There is a primitive recursive
relation T (e, x, s) and a primitive recursive function U (s), with the following prop-
erty: if f is any partial recursive function, then for some e,

    f(x) ≃ U(µs T(e, x, s))

for every x.

The proof of the normal form theorem is involved, but the basic idea is
simple. Every partial recursive function has an index e, intuitively, a number
coding its program or definition. If f ( x ) ↓, the computation can be recorded
systematically and coded by some number s, and that s codes the computation
of f on input x can be checked primitive recursively using only x and the
definition e. This means that T is primitive recursive. Given the full record of
the computation s, the “upshot” of s is the value of f ( x ), and it can be obtained
from s primitive recursively as well.
The normal form theorem shows that only a single unbounded search is
required for the definition of any partial recursive function. We can use the
numbers e as “names” of partial recursive functions, and write ϕe for the func-
tion f defined by the equation in the theorem. Note that any partial recursive
function can have more than one index—in fact, every partial recursive func-
tion has infinitely many indices.


25.13 The Halting Problem


The halting problem in general is the problem of deciding, given the specifica-
tion e (e.g., program) of a computable function and a number n, whether the
computation of the function on input n halts, i.e., produces a result. Famously,
Alan Turing proved that this problem itself cannot be solved by a computable
function, i.e., the function
    h(e, n) = { 1 if computation e halts on input n;
                0 otherwise }

is not computable.
In the context of partial recursive functions, the role of the specification
of a program may be played by the index e given in Kleene’s normal form
theorem. If f is a partial recursive function, any e for which the equation in
the normal form theorem holds, is an index of f . Given a number e, the normal
form theorem states that

    ϕ_e(x) ≃ U(µs T(e, x, s))

is partial recursive, and for every partial recursive f : N → N, there is an e ∈ N such that ϕ_e(x) ≃ f(x) for all x ∈ N. In fact, for each such f there is not just one, but infinitely many such e. The halting function h is defined by

    h(e, x) = { 1 if ϕ_e(x) ↓;
                0 otherwise }

Note that h(e, x) = 0 if ϕ_e(x) ↑, but also when e is not the index of a partial recursive function at all.

Theorem 25.9. The halting function h is not partial recursive.

Proof. If h were partial recursive, we could define


    d(y) = { 1 if h(y, y) = 0;
             µx (x ≠ x) otherwise }

From this definition it follows that

1. d(y) ↓ iff ϕ_y(y) ↑ or y is not the index of a partial recursive function.

2. d(y) ↑ iff ϕ_y(y) ↓.

If h were partial recursive, then d would be partial recursive as well. Thus, by the Kleene normal form theorem, it has an index e_d. Consider the value of h(e_d, e_d). There are two possible cases, 0 and 1.


1. If h(e_d, e_d) = 1 then ϕ_{e_d}(e_d) ↓. But ϕ_{e_d} ≃ d, and d(e_d) is defined iff h(e_d, e_d) = 0. So h(e_d, e_d) ≠ 1.

2. If h(e_d, e_d) = 0 then either e_d is not the index of a partial recursive function, or it is and ϕ_{e_d}(e_d) ↑. But again, ϕ_{e_d} ≃ d, and d(e_d) is undefined iff ϕ_{e_d}(e_d) ↓.

The upshot is that e_d cannot, after all, be the index of a partial recursive function. But if h were partial recursive, d would be too, and so our definition of e_d as an index of it would be admissible. We must conclude that h cannot be partial recursive.

25.14 General Recursive Functions


There is another way to obtain a set of total functions. Say a total function
f ( x, ~z) is regular if for every sequence of natural numbers ~z, there is an x such
that f ( x, ~z) = 0. In other words, the regular functions are exactly those func-
tions to which one can apply unbounded search, and end up with a total func-
tion. One can, conservatively, restrict unbounded search to regular functions:
Definition 25.10. The set of general recursive functions is the smallest set of
functions from the natural numbers to the natural numbers (of various arities)
containing zero, successor, and projections, and closed under composition,
primitive recursion, and unbounded search applied to regular functions.
Clearly every general recursive function is total. The difference between
?? and ?? is that in the latter one is allowed to use partial recursive functions
along the way; the only requirement is that the function you end up with at
the end is total. So the word “general,” a historic relic, is a misnomer; on the
surface, ?? is less general than ??. But, fortunately, the difference is illusory;
though the definitions are different, the set of general recursive functions and
the set of recursive functions are one and the same.

Problems
Problem 25.1. Multiplication satisfies the recursive equations

    0 · y = 0
    (x + 1) · y = (x · y) + y

Give the explicit precise definition of the function mult(x, y) = x · y, assuming that add(x, y) = x + y is already defined. Give the complete notation for mult.
Problem 25.2. Show that

    f(x, y) = 2^(2^(...^(2^x)))    (a tower of y 2's)


is primitive recursive.

Problem 25.3. Show that d(x, y) = ⌊x/y⌋ (i.e., division, where you disregard everything after the decimal point) is primitive recursive. When y = 0, we stipulate d(x, y) = 0. Give an explicit definition of d using primitive recursion and composition. You will have to detour through an auxiliary function; you cannot use recursion on the arguments x or y themselves.

Problem 25.4. Suppose R(x, ~z) is primitive recursive. Define the function m′_R(y, ~z) which returns the least x less than y such that R(x, ~z) holds, if there is one, and y + 1 otherwise, by primitive recursion from χ_R.

Problem 25.5. Define integer division d( x, y) using bounded minimization.

Problem 25.6. Show that there is a primitive recursive function sconcat(s) with the property that

    sconcat(⟨s_0, ..., s_k⟩) = s_0 ⌢ ... ⌢ s_k.



Chapter 26

The Lambda Calculus

This chapter needs to be expanded (issue #66).

26.1 Introduction
The lambda calculus was originally designed by Alonzo Church in the early
1930s as a basis for constructive logic, and not as a model of the computable
functions. But soon after the Turing computable functions, the recursive func-
tions, and the general recursive functions were shown to be equivalent, lambda
computability was added to the list. The fact that this initially came as a small
surprise makes the characterization all the more interesting.
Lambda notation is a convenient way of referring to a function directly
by a symbolic expression which defines it, instead of defining a name for it.
Instead of saying “let f be the function defined by f ( x ) = x + 3,” one can
say, “let f be the function λx. ( x + 3).” In other words, λx. ( x + 3) is just a
name for the function that adds three to its argument. In this expression, x
is a dummy variable, or a placeholder: the same function can just as well
be denoted by λy. (y + 3). The notation works even with other parameters
around. For example, suppose g( x, y) is a function of two variables, and k is a
natural number. Then λx. g( x, k) is the function which maps any x to g( x, k).
This way of defining a function from a symbolic expression is known as
lambda abstraction. The flip side of lambda abstraction is application: assuming
one has a function f (say, defined on the natural numbers), one can apply it to
any value, like 2. In conventional notation, of course, we write f (2) for the
result.
What happens when you combine lambda abstraction with application?
Then the resulting expression can be simplified, by “plugging” the applicand
in for the abstracted variable. For example,
(λx. ( x + 3))(2)


can be simplified to 2 + 3.
Up to this point, we have done nothing but introduce new notations for
conventional notions. The lambda calculus, however, represents a more radi-
cal departure from the set-theoretic viewpoint. In this framework:

1. Everything denotes a function.

2. Functions can be defined using lambda abstraction.

3. Anything can be applied to anything else.

For example, if F is a term in the lambda calculus, F(F) is always assumed to be meaningful. This liberal framework is known as the untyped lambda calculus, where "untyped" means "no restriction on what can be applied to what."
There is also a typed lambda calculus, which is an important variation on
the untyped version. Although in many ways the typed lambda calculus is
similar to the untyped one, it is much easier to reconcile with a classical set-
theoretic framework, and has some very different properties.
Research on the lambda calculus has proved to be central in theoretical
computer science, and in the design of programming languages. LISP, de-
signed by John McCarthy in the 1950s, is an early example of a language that
was influenced by these ideas.

26.2 The Syntax of the Lambda Calculus


One starts with a sequence of variables x, y, z, . . . and some constant symbols
a, b, c, . . . . The set of terms is defined inductively, as follows:

1. Each variable is a term.

2. Each constant is a term.

3. If M and N are terms, so is ( MN ).

4. If M is a term and x is a variable, then (λx. M ) is a term.

The system without any constants at all is called the pure lambda calculus.
We will follow a few notational conventions:

1. When parentheses are left out, application takes place from left to right.
For example, if M, N, P, and Q are terms, then MNPQ abbreviates
((( MN ) P) Q).

2. Again, when parentheses are left out, lambda abstraction is to be given the widest scope possible. For example, λx. MNP is read λx. (MNP).


3. A lambda can be used to abstract multiple variables. For example, λxyz. M


is short for λx. λy. λz. M.
For example,
λxy. xxyxλz. xz
abbreviates
λx. λy. (((( xx )y) x )λz. ( xz)).
You should memorize these conventions. They will drive you crazy at first,
but you will get used to them, and after a while they will drive you less crazy
than having to deal with a morass of parentheses.
Two terms that differ only in the names of the bound variables are called α-
equivalent; for example, λx. x and λy. y. It will be convenient to think of these
as being the “same” term; in other words, when we say that M and N are the
same, we also mean “up to renamings of the bound variables.” Variables that
are in the scope of a λ are called “bound”, while others are called “free.” There
are no free variables in the previous example; but in
(λz. yz) x
y and x are free, and z is bound.

26.3 Reduction of Lambda Terms


What can one do with lambda terms? Simplify them. If M and N are any
lambda terms and x is any variable, we can use M[ N/x ] to denote the result
of substituting N for x in M, after renaming any bound variables of M that
would interfere with the free variables of N after the substitution. For exam-
ple,
(λw. xxw)[yyz/x ] = λw. (yyz)(yyz)w.
Alternative notations for substitution are [ N/x ] M, M [ N/x ], and also M [ x/N ].
Beware!
Intuitively, (λx. M)N and M[N/x] have the same meaning; the act of replacing the first term by the second is called β-conversion. More generally, if it is possible to convert a term P to P′ by β-conversion of some subterm, one says P β-reduces to P′ in one step. If P can be converted to P′ with any number of one-step reductions (possibly none), then P β-reduces to P′. A term that cannot be β-reduced any further is called β-irreducible, or β-normal. I will say "reduces" instead of "β-reduces," etc., when the context is clear.
Let us consider some examples.
1. We have

       (λx. xxy)λz. z ▷₁ (λz. z)(λz. z)y
                      ▷₁ (λz. z)y
                      ▷₁ y


2. "Simplifying" a term can make it more complex:

       (λx. xxy)(λx. xxy) ▷₁ (λx. xxy)(λx. xxy)y
                          ▷₁ (λx. xxy)(λx. xxy)yy
                          ▷₁ ...

3. It can also leave a term unchanged:

       (λx. xx)(λx. xx) ▷₁ (λx. xx)(λx. xx)

4. Also, some terms can be reduced in more than one way; for example,

       (λx. (λy. yx)z)v ▷₁ (λy. yv)z

   by contracting the outermost application; and

       (λx. (λy. yx)z)v ▷₁ (λx. zx)v

   by contracting the innermost one. Note, in this case, however, that both terms further reduce to the same term, zv.

The final outcome in the last example is not a coincidence, but rather il-
lustrates a deep and important property of the lambda calculus, known as the
“Church-Rosser property.”

26.4 The Church-Rosser Property


Theorem 26.1. Let M, N_1, and N_2 be terms, such that M ▷ N_1 and M ▷ N_2. Then there is a term P such that N_1 ▷ P and N_2 ▷ P.

Corollary 26.2. Suppose M can be reduced to normal form. Then this normal form
is unique.

Proof. If M ▷ N_1 and M ▷ N_2, by the previous theorem there is a term P such that N_1 and N_2 both reduce to P. If N_1 and N_2 are both in normal form, this can only happen if N_1 = P = N_2.

Finally, we will say that two terms M and N are β-equivalent, or just equivalent, if they reduce to a common term; in other words, if there is some P such that M ▷ P and N ▷ P. This is written M ≡ N. Using ??, you can check that ≡ is an equivalence relation, with the additional property that for every M and N, if M ▷ N or N ▷ M, then M ≡ N. (In fact, one can show that ≡ is the smallest equivalence relation having this property.)


26.5 Representability by Lambda Terms


How can the lambda calculus serve as a model of computation? At first, it is
not even clear how to make sense of this statement. To talk about computabil-
ity on the natural numbers, we need to find a suitable representation for such
numbers. Here is one that works surprisingly well.

Definition 26.3. For each natural number n, define the numeral n to be the
lambda term λx. λy. ( x ( x ( x (. . . x (y))))), where there are n x’s in all.

The terms n are “iterators”: on input f , n returns the function mapping y


to f n (y). Note that each numeral is normal. We can now say what it means
for a lambda term to “compute” a function on the natural numbers.

Definition 26.4. Let f(x_0, ..., x_{n−1}) be an n-ary partial function from N to N. We say a lambda term X represents f if for every sequence of natural numbers m_0, ..., m_{n−1},

    X m_0 m_1 ... m_{n−1} ▷ f(m_0, m_1, ..., m_{n−1})

(where the numbers on both sides stand for the corresponding numerals) if f(m_0, ..., m_{n−1}) is defined, and X m_0 m_1 ... m_{n−1} has no normal form otherwise.

Theorem 26.5. A function f is a partial computable function if and only if it is represented by a lambda term.

This theorem is somewhat striking. As a model of computation, the lambda calculus is a rather simple calculus; the only operations are lambda abstraction and application! From these meager resources, however, it is possible to implement any computational procedure.

26.6 Lambda Representable Functions are Computable


Theorem 26.6. If a partial function f is represented by a lambda term, it is com-
putable.

Proof. Suppose a function f is represented by a lambda term X. Let us describe an informal procedure to compute f. On input m_0, ..., m_{n−1}, write down the term X m_0 ... m_{n−1}. Build a tree, first writing down all the one-step reductions of the original term; below that, write all the one-step reductions
down the term Xm0 . . . mn−1 . Build a tree, first writing down all the one-step
reductions of the original term; below that, write all the one-step reductions
of those (i.e., the two-step reductions of the original term); and keep going. If
you ever reach a numeral, return that as the answer; otherwise, the function
is undefined.
An appeal to Church's thesis tells us that this function is computable. A better way to prove the theorem would be to give a recursive description of this search procedure. For example, one could define a sequence of primitive recursive functions and relations, "IsASubterm," "Substitute," "ReducesToInOneStep,"


“ReductionSequence,” “Numeral,” etc. The partial recursive procedure for


computing f (m0 , . . . , mn−1 ) is then to search for a sequence of one-step reduc-
tions starting with Xm0 . . . mn−1 and ending with a numeral, and return the
number corresponding to that numeral. The details are long and tedious but
otherwise routine.

26.7 Computable Functions are Lambda Representable


Theorem 26.7. Every computable partial function is representable by a lambda term.

Proof. We need to show that every partial computable function f is represented by a lambda term. By Kleene's normal form theorem, it suffices to show that every primitive recursive function is represented by a lambda term, and then that the functions so represented are closed under suitable compositions and unbounded search. To show that every primitive recursive function is represented by a lambda term, it suffices to show that the initial functions are represented, and that the partial functions that are represented by lambda terms are closed under composition, primitive recursion, and unbounded search.

We will use a more conventional notation to make the rest of the proof
more readable. For example, we will write M( x, y, z) instead of Mxyz. While
this is suggestive, you should remember that terms in the untyped lambda
calculus do not have associated arities; so, for the same term M, it makes just
as much sense to write M ( x, y) and M( x, y, z, w). But using this notation indi-
cates that we are treating M as a function of three variables, and helps make
the intentions behind the definitions clearer. In a similar way, we will say
“define M by M ( x, y, z) = . . . ” instead of “define M by M = λx. λy. λz. . . ..”

26.8 The Basic Primitive Recursive Functions are Lambda


Representable
Lemma 26.8. The functions 0, S, and P^n_i are lambda representable.

Proof. Zero, 0, is just λx. λy. y.


The successor function S is defined by S(u) = λx. λy. x (uxy). You should
think about why this works; for each numeral n, thought of as an iterator, and
each function f , S(n, f ) is a function that, on input y, applies f n times starting
with y, and then applies it once more.
There is nothing to say about projections: P^n_i ( x0 , . . . , xn−1 ) = xi . In other
words, by our conventions, P^n_i is the lambda term λx0 . . . . λxn−1 . xi .
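Continuing the informal Python rendering of Church numerals (our illustration, not part of the text), the initial functions come out as follows:

ZERO = lambda x: lambda y: y                      # 0 = λx. λy. y
SUCC = lambda u: lambda x: lambda y: x(u(x)(y))   # S(u) = λx. λy. x(u x y)
P3_1 = lambda x0: lambda x1: lambda x2: x1        # the projection P^3_1

to_int = lambda m: m(lambda k: k + 1)(0)
assert to_int(SUCC(SUCC(ZERO))) == 2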


26.9 Lambda Representable Functions Closed under


Composition
Lemma 26.9. The lambda representable functions are closed under composition.

Proof. Suppose f is defined by composition from h, g0 , . . . , gk−1 . Assuming h,


g0 , . . . , gk−1 are represented by h, g0 , . . . , gk−1 , respectively, we need to find a
term f representing f . But we can simply define f by

f ( x0 , . . . , xl −1 ) = h( g0 ( x0 , . . . , xl −1 ), . . . , gk−1 ( x0 , . . . , xl −1 )).

In other words, the language of the lambda calculus is well suited to represent
composition.

26.10 Lambda Representable Functions Closed under


Primitive Recursion
When it comes to primitive recursion, we finally need to do some work. We
will have to proceed in stages. As before, on the assumption that we already
have terms g and h representing functions g and h, respectively, we want a
term f representing the function f defined by

f (0, ~z) = g(~z)


f ( x + 1, ~z) = h( x, f ( x, ~z), ~z).

So, in general, given lambda terms G′ and H′, it suffices to find a term F such
that

F (0, ~z) ≡ G′ (~z)
F (n + 1, ~z) ≡ H′ (n, F (n, ~z), ~z)

for every natural number n; the fact that G′ and H′ represent g and h means
that whenever we plug in numerals ~m for ~z, F (n + 1, ~m ) will normalize to the
right answer.
But for this, it suffices to find a term F satisfying

F (0) ≡ G
F (n + 1) ≡ H (n, F (n))

for every natural number n, where

G = λ~z. G′ (~z) and
H (u, v) = λ~z. H′ (u, v(u, ~z), ~z).

In other words, with lambda trickery, we can avoid having to worry about the
extra parameters ~z—they just get absorbed in the lambda notation.
Before we define the term F, we need a mechanism for handling ordered
pairs. This is provided by the next lemma.
Lemma 26.10. There is a lambda term D such that for each pair of lambda terms M
and N, D ( M, N )(0) . M and D ( M, N )(1) . N.

Proof. First, define the lambda term K by

K (y) = λx. y.

In other words, K is the term λy. λx. y. Looking at it differently, for every M,
K ( M ) is a constant function that returns M on any input.
Now define D ( x, y, z) by D ( x, y, z) = z(K (y)) x. Then we have

D ( M, N, 0) . 0(K ( N )) M . M and
D ( M, N, 1) . 1(K ( N )) M . K ( N ) M . N,

as required.

The idea is that D ( M, N ) represents the pair ⟨ M, N ⟩, and if P is assumed


to represent such a pair, P(0) and P(1) represent the left and right projections,
( P)0 and ( P)1 . We will use the latter notations.
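In the same hedged Python rendering, the pairing combinator and its projections behave as follows (ZERO and ONE play the numerals 0 and 1):

K = lambda y: lambda x: y                      # K(y) = λx. y
D = lambda x: lambda y: lambda z: z(K(y))(x)   # D(x, y, z) = z (K y) x

ZERO = lambda x: lambda y: y
ONE  = lambda x: lambda y: x(y)

P = D(10)(20)            # represents the pair ⟨10, 20⟩
assert P(ZERO) == 10     # left projection (P)0
assert P(ONE) == 20      # right projection (P)1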
Lemma 26.11. The lambda representable functions are closed under primitive recur-
sion.

Proof. We need to show that given any terms, G and H, we can find a term F
such that

F (0) ≡ G
F (n + 1) ≡ H (n, F (n))

for every natural number n. The idea is roughly to compute sequences of pairs

⟨0, F (0)⟩, ⟨1, F (1)⟩, . . . ,


using numerals as iterators. Notice that the first pair is just ⟨0, G ⟩. Given a
pair ⟨n, F (n)⟩, the next pair, ⟨n + 1, F (n + 1)⟩, is supposed to be equivalent to
⟨n + 1, H (n, F (n))⟩. We will design a lambda term T that makes this one-step
transition.
The details are as follows. Define T (u) by

T (u) = ⟨S((u)0 ), H ((u)0 , (u)1 )⟩.

Now it is easy to verify that for any number n,

T (⟨n, M ⟩) . ⟨n + 1, H (n, M )⟩.


As suggested above, given G and H, define F (u) by

F (u) = (u( T, ⟨0, G ⟩))1 .

In other words, on input n, F iterates T n times on ⟨0, G ⟩, and then returns the
second component. To start with, we have

1. 0( T, ⟨0, G ⟩) ≡ ⟨0, G ⟩
2. F (0) ≡ G

By induction on n, we can show that for each natural number n, one has the
following:

1. n + 1( T, ⟨0, G ⟩) ≡ ⟨n + 1, F (n + 1)⟩
2. F (n + 1) ≡ H (n, F (n))
For the second clause, we have

F (n + 1) . (n + 1( T, ⟨0, G ⟩))1
≡ ( T (n( T, ⟨0, G ⟩)))1
≡ ( T (⟨n, F (n)⟩))1
≡ (⟨n + 1, H (n, F (n))⟩)1
≡ H (n, F (n)).

Here we have used the induction hypothesis on the second-to-last line. For
the first clause, we have

n + 1( T, ⟨0, G ⟩) ≡ T (n( T, ⟨0, G ⟩))
≡ T (⟨n, F (n)⟩)
≡ ⟨n + 1, H (n, F (n))⟩
≡ ⟨n + 1, F (n + 1)⟩.

Here we have used the second clause in the last line. So we have shown
F (0) ≡ G and, for every n, F (n + 1) ≡ H (n, F (n)), which is exactly what
we needed.
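The computational content of this construction is easier to see if one recasts it with native Python pairs and integers; the following is a paraphrase of the proof, not the lambda term F itself:

def prim_rec(g, h):
    # f(0) = g and f(n+1) = h(n, f(n)), obtained by iterating the
    # one-step transition T(⟨k, v⟩) = ⟨k+1, h(k, v)⟩ on ⟨0, g⟩.
    def f(n):
        pair = (0, g)
        for _ in range(n):
            k, v = pair
            pair = (k + 1, h(k, v))
        return pair[1]       # the second component, as in F
    return f

fact = prim_rec(1, lambda n, v: (n + 1) * v)   # factorial
assert fact(4) == 24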

26.11 Fixed-Point Combinators


Suppose you have a lambda term g, and you want another term k with the
property that k is β-equivalent to gk. Define terms

diag( x ) = xx

and
l ( x ) = g(diag( x ))

using our notational conventions; in other words, l is the term λx. g( xx ). Let
k be the term ll. Then we have

k = (λx. g( xx ))(λx. g( xx ))
. g((λx. g( xx ))(λx. g( xx )))
= gk.

If one takes
Y = λg. ((λx. g( xx ))(λx. g( xx )))
then Yg and g(Yg) reduce to a common term; so Yg ≡ β g(Yg). This is known
as “Curry’s combinator.” If instead one takes

Y = (λxg. g( xxg))(λxg. g( xxg))

then in fact Yg reduces to g(Yg), which is a stronger statement. This latter


version of Y is known as “Turing’s combinator.”
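Python evaluates arguments eagerly, so Curry's combinator as written loops forever there; an honest sketch uses the η-expanded variant (often called Z). This adaptation is ours, not the text's:

# Curry's Y with the self-application eta-expanded, so that eager
# evaluation does not unfold it prematurely.
Z = lambda g: (lambda x: g(lambda v: x(x)(v)))(lambda x: g(lambda v: x(x)(v)))

fact = Z(lambda f: lambda n: 1 if n == 0 else n * f(n - 1))
assert fact(5) == 120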

26.12 Lambda Representable Functions Closed under


Minimization
Lemma 26.12. Suppose f ( x, y) is primitive recursive. Let g be defined by

g( x ) ' µy f ( x, y) = 0.

Then g is represented by a lambda term.

Proof. The idea is roughly as follows. Given x, we will use the fixed-point
lambda term Y to define a function h x (n) which searches for a y starting at n;
then g( x ) is just h x (0). The function h x can be expressed as the solution of a
fixed-point equation:
h x (n) ' n if f ( x, n) = 0, and h x (n) ' h x (n + 1) otherwise.

Here are the details. Since f is primitive recursive, it is represented by


some term F. Remember that we also have a lambda term D, such that D ( M, N, 0̄) .
M and D ( M, N, 1̄) . N. Fixing x for the moment, to represent h x we want to
find a term H (depending on x) satisfying

H (n) ≡ D (n, H (S(n)), F ( x, n)).

We can do this using the fixed-point term Y. First, let U be the term

λh. λz. D (z, (h(Sz)), F ( x, z)),


and then let H be the term YU. Notice that the only free variable in H is x. Let
us show that H satisfies the equation above.
By the definition of Y, we have

H = YU ≡ U (YU ) = U ( H ).

In particular, for each natural number n, we have

H (n) ≡ U ( H, n)
. D (n, H (S(n)), F ( x, n)),

as required. Notice that if you substitute a numeral m for x in the last line, the
expression reduces to n if F (m, n) reduces to 0, and it reduces to H (S(n)) if
F (m, n) reduces to any other numeral.
To finish off the proof, let G be λx. H (0). Then G represents g; in other
words, for every m, G (m) reduces to g(m), if g(m) is defined, and
has no normal form otherwise.
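The search performed by H can be mimicked in Python using the Z combinator sketched in the previous section; here F is an ordinary Python function standing in for the term representing f, so this is a hedged sketch of the idea rather than the construction itself:

Z = lambda g: (lambda x: g(lambda v: x(x)(v)))(lambda x: g(lambda v: x(x)(v)))

def mu_search(F):
    # g(x) = least y with F(x, y) = 0; the recursion diverges if there
    # is no such y, mirroring "no normal form" in the lemma.
    def g(x):
        h = Z(lambda rec: lambda n: n if F(x, n) == 0 else rec(n + 1))
        return h(0)
    return g

g = mu_search(lambda x, y: 0 if y * y >= x else 1)
assert g(10) == 4    # least y with y*y >= 10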



Chapter 27

Computability Theory

Material in this chapter should be reviewed and expanded. In particu-


lar, there are no exercises yet.

27.1 Introduction
The branch of logic known as Computability Theory deals with issues having to
do with the computability, or relative computability, of functions and sets. It is
evidence of Kleene’s influence that the subject used to be known as Recursion
Theory, and today, both names are commonly used.
Let us call a function f : N ⇀ N partial computable if it can be computed
in some model of computation. If f is total we will simply say that f is com-
putable. A relation R with computable characteristic function χ R is also called
computable. If f and g are partial functions, we will write f ( x ) ↓ to mean that
f is defined at x, i.e., x is in the domain of f ; and f ( x ) ↑ to mean the opposite,
i.e., that f is not defined at x. We will use f ( x ) ' g( x ) to mean that either f ( x )
and g( x ) are both undefined, or they are both defined and equal.
One can explore the subject without having to refer to a specific model
of computation. To do this, one shows that there is a universal partial com-
putable function, Un(k, x ). This allows us to enumerate the partial computable
functions. We will adopt the notation ϕk to denote the k-th unary partial com-
putable function, defined by ϕk ( x ) ' Un(k, x ). (Kleene used {k} for this pur-
pose, but this notation has not been used as much recently.) Slightly more
generally, we can uniformly enumerate the partial computable functions of
arbitrary arities, and we will use ϕ^n_k to denote the k-th n-ary partial recursive
function.
Recall that if f (~x, y) is a total or partial function, then µy f (~x, y) is the
function of ~x that returns the least y such that f (~x, y) = 0, assuming that all of
f (~x, 0), . . . , f (~x, y − 1) are defined; if there is no such y, µy f (~x, y) is undefined.


If R(~x, y) is a relation, µy R(~x, y) is defined to be the least y such that R(~x, y) is


true; in other words, the least y such that one minus the characteristic function
of R is equal to zero at ~x, y.
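A direct Python transcription of the µ operator, offered only as an informal illustration, makes the partiality visible: when no witness exists, the loop simply never returns.

def mu(f, *xs):
    # Least y with f(*xs, y) == 0; diverges if there is none, or if f
    # itself fails to return at some earlier point.
    y = 0
    while f(*xs, y) != 0:
        y += 1
    return y

assert mu(lambda x, y: x - y if x >= y else 1, 7) == 7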
To show that a function is computable, there are two ways one can pro-
ceed:
1. Rigorously: describe a Turing machine or partial recursive function ex-
plicitly, and show that it computes the function you have in mind;

2. Informally: describe an algorithm that computes it, and appeal to Church’s


thesis.
There is no fine line between the two; a detailed description of an algorithm
should provide enough information so that it is relatively clear how one could,
in principle, design the right Turing machine or sequence of partial recursive
definitions. Fully rigorous definitions are unlikely to be informative, and we
will try to find a happy medium between these two approaches; in short, we
will try to find intuitive yet rigorous proofs that the precise definitions could
be obtained.

27.2 Coding Computations


In every model of computation, it is possible to do the following:
1. Describe the definitions of computable functions in a systematic way. For
instance, you can think of Turing machine specifications, recursive def-
initions, or programs in a programming language as providing these
definitions.

2. Describe the complete record of the computation of a function given by


some definition for a given input. For instance, a Turing machine com-
putation can be described by the sequence of configurations (state of the
machine, contents of the tape) for each step of computation.

3. Test whether a putative record of a computation is in fact the record of


how a computable function with a given definition would be computed
for a given input.

4. Extract from such a description of the complete record of a computation


the value of the function for a given input. For instance, the contents of
the tape in the very last step of a halting Turing machine computation is
the value.
Using coding, it is possible to assign to each description of a computable
function a numerical index in such a way that the instructions can be recovered
from the index in a computable way. Similarly, the complete record of a com-
putation can be coded by a single number as well. The resulting arithmetical


relation “s codes the record of computation of the function with index e for
input x” and the function “output of computation sequence with code s” are
then computable; in fact, they are primitive recursive.
This fundamental fact is very powerful, and allows us to prove a number
of striking and important results about computability, independently of the
model of computation chosen.

27.3 The Normal Form Theorem


Theorem 27.1 (Kleene’s Normal Form Theorem). There are a primitive recur-
sive relation T (k, x, s) and a primitive recursive function U (s), with the following
property: if f is any partial computable function, then for some k,

f ( x ) ' U (µs T (k, x, s))

for every x.

Proof Sketch. For any model of computation one can rigorously define a de-
scription of the computable function f and code such description using a nat-
ural number k. One can also rigorously define a notion of “computation se-
quence” which records the process of computing the function with index k for
input x. These computation sequences can likewise be coded as numbers s.
This can be done in such a way that (a) it is decidable whether a number s
codes the computation sequence of the function with index k on input x, and
(b) the end result of the computation sequence coded by s can be computed
from s. In fact, the relation in (a) and the function in (b) are primitive recursive.

In order to give a rigorous proof of the Normal Form Theorem, we would


have to fix a model of computation and carry out the coding of descriptions of
computable functions and of computation sequences in detail, and verify that
the relation T and function U are primitive recursive. For most applications,
it suffices that T and U are computable and that U is total.
It is probably best to remember the proof of the normal form theorem in
slogan form: µs T (k, x, s) searches for a computation sequence of the function
with index k on input x, and U returns the output of the computation sequence
if one can be found.
T and U can be used to define the enumeration ϕ0 , ϕ1 , ϕ2 , . . . . From now
on, we will assume that we have fixed a suitable choice of T and U, and take
the equation
ϕe ( x ) ' U (µs T (e, x, s))
to be the definition of ϕe .
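In slogan-level Python, with T and U taken as given (the theorem guarantees suitable primitive recursive choices, which we do not construct here):

def phi(e, x, T, U):
    # φ_e(x) ≃ U(µs T(e, x, s)): search for a coded computation
    # sequence s, then decode its output. Diverges if none exists.
    s = 0
    while not T(e, x, s):
        s += 1
    return U(s)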
Here is another useful fact:

Theorem 27.2. Every partial computable function has infinitely many indices.


Again, this is intuitively clear. Given any (description of) a computable


function, one can come up with a different description which computes the
same function (input-output pair) but does so, e.g., by first doing something
that has no effect on the computation (say, test if 0 = 0, or count to 5, etc.). The
index of the altered description will always be different from the original in-
dex. Both are indices of the same function, just computed slightly differently.

27.4 The s-m-n Theorem


The next theorem is known as the “s-m-n theorem,” for a reason that will be
clear in a moment. The hard part is understanding just what the theorem says;
once you understand the statement, it will seem fairly obvious.
Theorem 27.3. For each pair of natural numbers n and m, there is a primitive re-
cursive function s^m_n such that for every sequence x, a0 , . . . , am−1 , y0 , . . . , yn−1 , we
have

ϕ^n_{s^m_n ( x, a0 , . . . , am−1 )} (y0 , . . . , yn−1 ) ' ϕ^{m+n}_x ( a0 , . . . , am−1 , y0 , . . . , yn−1 ).

It is helpful to think of s^m_n as acting on programs. That is, s^m_n takes a pro-
gram, x, for an (m + n)-ary function, as well as fixed inputs a0 , . . . , am−1 ; and
it returns a program, s^m_n ( x, a0 , . . . , am−1 ), for the n-ary function of the remain-
ing arguments. If you think of x as the description of a Turing machine, then
s^m_n ( x, a0 , . . . , am−1 ) is the Turing machine that, on input y0 , . . . , yn−1 , prepends
a0 , . . . , am−1 to the input string, and runs x. Each s^m_n is then just a primitive
recursive function that finds a code for the appropriate Turing machine.
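In programming terms, s^m_n is partial application, except that it operates on program texts (indices) rather than on run-time values. A loose Python analogue, with closures standing in for indices, might look like this:

from functools import partial

def s_m_n(program, *fixed):
    # The real s^m_n returns an *index* for the specialized program;
    # here a closure plays that role.
    return partial(program, *fixed)

add3 = lambda a, b, c: a + b + c
add12 = s_m_n(add3, 1, 2)     # "program" for the unary function y -> 1 + 2 + y
assert add12(4) == 7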

27.5 The Universal Partial Computable Function


Theorem 27.4. There is a universal partial computable function Un(k, x ). In other
words, there is a function Un(k, x ) such that:
1. Un(k, x ) is partial computable.

2. If f ( x ) is any partial computable function, then there is a natural number k


such that f ( x ) ' Un(k, x ) for every x.

Proof. Let Un(k, x ) ' U (µs T (k, x, s)) in Kleene’s normal form theorem.

This is just a precise way of saying that we have an effective enumeration


of the partial computable functions; the idea is that if we write f k for the func-
tion defined by f k ( x ) = Un(k, x ), then the sequence f 0 , f 1 , f 2 , . . . includes all
the partial computable functions, with the property that f k ( x ) can be com-
puted “uniformly” in k and x. For simplicity, we are using a binary func-
tion that is universal for unary functions, but by coding sequences of num-
bers we can easily generalize this to more arguments. For example, note that


if f ( x, y, z) is a 3-place partial recursive function, then the function g( x ) '


f (( x )0 , ( x )1 , ( x )2 ) is a unary recursive function.

27.6 No Universal Computable Function


Theorem 27.5. There is no universal computable function. In other words, the uni-
versal function Un0 (k, x ) = ϕk ( x ) is not computable.

Proof. This theorem says that there is no total computable function that is uni-
versal for the total computable functions. The proof is a simple diagonaliza-
tion: if Un0 (k, x ) were total and computable, then
d( x ) = Un0 ( x, x ) + 1
would also be total and computable. However, for every k, d(k ) is not equal
to Un0 (k, k).

Theorem 27.4 above shows that we can get around this diagonalization ar-
gument, but only at the expense of allowing partial functions. It is worth
trying to understand what goes wrong with the diagonalization argument,
when we try to apply it in the partial case. In particular, the function h( x ) =
Un( x, x ) + 1 is partial recursive. Suppose h is the k-th function in the enumer-
ation; what can we say about h(k )?

27.7 The Halting Problem


Since, in our construction, Un(k, x ) is defined if and only if the computation
of the function coded by k produces a value for input x, it is natural to ask if
we can decide whether this is the case. In fact, we cannot. For the Turing
machine model of computation, this means that whether a given Turing ma-
chine halts on a given input is computationally undecidable. The following
theorem is therefore known as the “undecidability of the halting problem.” I
will provide two proofs below. The first continues the thread of our previous
discussion, while the second is more direct.
Theorem 27.6. Let
h(k, x ) = 1 if Un(k, x ) is defined, and h(k, x ) = 0 otherwise.
Then h is not computable.

Proof. If h were computable, we would have a universal computable function,


as follows. Suppose h is computable, and define
Un0 (k, x ) = Un(k, x ) if h(k, x ) = 1, and Un0 (k, x ) = 0 otherwise.


But now Un0 (k, x ) is a total function, and is computable if h is. For instance,
we could define g using primitive recursion, by

g(0, k, x ) ' 0
g(y + 1, k, x ) ' Un(k, x );

then
Un0 (k, x ) ' g(h(k, x ), k, x ).
And since Un0 (k, x ) agrees with Un(k, x ) wherever the latter is defined, Un0 is
universal for those partial computable functions that happen to be total. But
this contradicts Theorem 27.5.

Proof. Suppose h(k, x ) were computable. Define the function g by


g( x ) = 0 if h( x, x ) = 0, and g( x ) is undefined otherwise.

The function g is partial computable; for example, one can define it as
g( x ) ' µy (h( x, x ) = 0). So, for some k, g( x ) ' Un(k, x ) for every x. Is g
defined at k? If it is, then, by the definition of g, h(k, k) = 0. By the definition
of h, this means that Un(k, k ) is undefined; but by our assumption that g( x ) '
Un(k, x ) for every x, this means that g(k) is undefined, a contradiction. On the
other hand, if g(k) is undefined, then h(k, k ) ≠ 0, and so h(k, k) = 1. But this
means that Un(k, k ) is defined, i.e., that g(k ) is defined, again a contradiction.

We can describe this argument in terms of Turing machines. Suppose there


were a Turing machine H that took as input a description of a Turing machine
K and an input x, and decided whether or not K halts on input x. Then we
could build another Turing machine G which takes a single input x, calls H to
decide if machine x halts on input x, and does the opposite. In other words, if
H reports that x halts on input x, G goes into an infinite loop, and if H reports
that x doesn’t halt on input x, then G just halts. Does G halt on input G? The
argument above shows that it does if and only if it doesn’t—a contradiction.
So our supposition that there is such a Turing machine H is false.
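The construction of G from the hypothetical tester H is short enough to write out in Python (H is, of course, an assumption; no such program exists):

def make_G(H):
    # H(k, x) is supposed to return True iff machine k halts on input x.
    def G(x):
        if H(x, x):          # H says x halts on input x...
            while True:      # ...so G deliberately loops forever;
                pass
        return 0             # otherwise G halts immediately.
    return G
# Running G on (a code for) G itself yields the contradiction above.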

27.8 Comparison with Russell’s Paradox


It is instructive to compare and contrast the arguments in this section with
Russell’s paradox:

1. Russell’s paradox: let S = { x : x ∉ x }. Then x ∈ S if and only if x ∉ S, a
contradiction.
Conclusion: There is no such set S. Assuming the existence of a “set of
all sets” is inconsistent with the other axioms of set theory.


2. A modification of Russell’s paradox: let F be the “function” from the set


of all functions to {0, 1}, defined by
F ( f ) = 1 if f is in the domain of f and f ( f ) = 0, and F ( f ) = 0 otherwise.

A similar argument shows that F ( F ) = 0 if and only if F ( F ) = 1, a


contradiction.
Conclusion: F is not a function. The “set of all functions” is too big to be
the domain of a function.

3. The diagonalization argument: let f 0 , f 1 , . . . be the enumeration of the


partial computable functions, and let G : N → {0, 1} be defined by
G ( x ) = 1 if f x ( x ) ↓= 0, and G ( x ) = 0 otherwise.

If G is computable, then it is the function f k for some k. But then G (k) =


1 if and only if G (k ) = 0, a contradiction.
Conclusion: G is not computable. Note that according to the axioms of set
theory, G is still a function; there is no paradox here, just a clarification.

Talk of partial functions, computable functions, partial computable


functions, and so on can be confusing. The set of all partial functions from N
to N is a big collection of objects. Some of them are total, some of them are
computable, some are both total and computable, and some are neither. Keep
in mind that when we say “function,” by default, we mean a total function.
Thus we have:

1. computable functions

2. partial computable functions that are not total

3. functions that are not computable

4. partial functions that are neither total nor computable

To sort this out, it might help to draw a big square representing all the partial
functions from N to N, and then mark off two overlapping regions, corre-
sponding to the total functions and the computable partial functions, respec-
tively. It is a good exercise to see if you can describe an object in each of the
resulting regions in the diagram.


27.9 Computable Sets


We can extend the notion of computability from computable functions to com-
putable sets:
Definition 27.7. Let S be a set of natural numbers. Then S is computable iff its
characteristic function is. In other words, S is computable iff the function χS ,
defined by χS ( x ) = 1 if x ∈ S and χS ( x ) = 0 otherwise,
is computable. Similarly, a relation R( x0 , . . . , xk−1 ) is computable if and only
if its characteristic function is.
Computable sets are also called decidable.
Notice that we now have a number of notions of computability: for partial
functions, for functions, and for sets. Do not get them confused! The Turing
machine computing a partial function returns the output of the function, for
input values at which the function is defined; the Turing machine computing
a set returns either 1 or 0, after deciding whether or not the input value is in
the set or not.

27.10 Computably Enumerable Sets


Definition 27.8. A set is computably enumerable if it is empty or the range of a
computable function.

Historical Remarks Computably enumerable sets are also called recursively
enumerable. This is the original terminology, and today both are com-
monly used, as well as the abbreviations “c.e.” and “r.e.”
You should think about what the definition means, and why the termi-
nology is appropriate. The idea is that if S is the range of the computable
function f , then
S = { f (0), f (1), f (2), . . . },
and so f can be seen as “enumerating” the elements of S. Note that according
to the definition, f need not be an increasing function, i.e., the enumeration
need not be in increasing order. In fact, f need not even be injective, so that
the constant function f ( x ) = 0 enumerates the set {0}.
Any computable set is computably enumerable. To see this, suppose S is
computable. If S is empty, then by definition it is computably enumerable.
Otherwise, let a be any element of S. Define f by
f ( x ) = x if χS ( x ) = 1, and f ( x ) = a otherwise.
Then f is a computable function, and S is the range of f .


27.11 Equivalent Definitions of Computably Enumerable


Sets
The following gives a number of important equivalent statements of what it
means to be computably enumerable.

Theorem 27.9. Let S be a set of natural numbers. Then the following are equivalent:

1. S is computably enumerable.

2. S is the range of a partial computable function.

3. S is empty or the range of a primitive recursive function.

4. S is the domain of a partial computable function.

The first three clauses say that we can equivalently take any non-empty
computably enumerable set to be enumerated by either a computable func-
tion, a partial computable function, or a primitive recursive function. The
fourth clause tells us that if S is computably enumerable, then for some index
e,
S = { x : ϕe ( x ) ↓}.
In other words, S is the set of inputs for which the computation of ϕe
halts. For that reason, computably enumerable sets are sometimes called semi-
decidable: if a number is in the set, you eventually get a “yes,” but if it isn’t,
you never get a “no”!

Proof. Since every primitive recursive function is computable and every com-
putable function is partial computable, (3) implies (1) and (1) implies (2).
(Note that if S is empty, S is the range of the partial computable function that
is nowhere defined.) If we show that (2) implies (3), we will have shown the
first three clauses equivalent.
So, suppose S is the range of the partial computable function ϕe . If S is
empty, we are done. Otherwise, let a be any element of S. By Kleene’s normal
form theorem, we can write

ϕe ( x ) = U (µs T (e, x, s)).

In particular, ϕe ( x ) ↓= y if and only if there is an s such that T (e, x, s)


and U (s) = y. Define f (z) by
f (z) = U ((z)1 ) if T (e, (z)0 , (z)1 ), and f (z) = a otherwise.

Then f is primitive recursive, because T and U are. Expressed in terms of Tur-


ing machines, if z codes a pair h(z)0 , (z)1 i such that (z)1 is a halting computa-
tion of machine e on input (z)0 , then f returns the output of the computation;


otherwise, it returns a. We need to show that S is the range of f , i.e., for any
natural number y, y ∈ S if and only if it is in the range of f . In the forwards
direction, suppose y ∈ S. Then y is in the range of ϕe , so for some x and s,
T (e, x, s) and U (s) = y; but then y = f (⟨ x, s⟩). Conversely, suppose y is in the
range of f . Then either y = a, or for some z, T (e, (z)0 , (z)1 ) and U ((z)1 ) = y.
Since, in the latter case, ϕe ( x ) ↓= y, either way, y is in S.
(The notation ϕe ( x ) ↓= y means “ϕe ( x ) is defined and equal to y.” We
could just as well use ϕe ( x ) = y, but the extra arrow is sometimes helpful in
reminding us that we are dealing with a partial function.)
To finish up the proof of Theorem 27.9, it suffices to show that (1) and (4) are equiv-
alent. First, let us show that (1) implies (4). Suppose S is the range of a com-
putable function f , i.e.,

S = {y : for some x, f ( x ) = y}.

Let
g(y) = µx f ( x ) = y.

Then g is a partial computable function, and g(y) is defined if and only if for
some x, f ( x ) = y. In other words, the domain of g is the range of f . Expressed
in terms of Turing machines: given a Turing machine F that enumerates the
elements of S, let G be the Turing machine that semi-decides S by searching
through the outputs of F to see if a given element is in the set.
Finally, to show (4) implies (1), suppose that S is the domain of the partial
computable function ϕe , i.e.,

S = { x : ϕe ( x ) ↓}.

If S is empty, we are done; otherwise, let a be any element of S. Define f by


f (z) = (z)0 if T (e, (z)0 , (z)1 ), and f (z) = a otherwise.

Then, as above, a number x is in the range of f if and only if ϕe ( x ) ↓, i.e., if and


only if x ∈ S. Expressed in terms of Turing machines: given a machine Me that
semi-decides S, enumerate the elements of S by running through all possible
Turing machine computations, and returning the inputs that correspond to
halting computations.
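The enumeration in this last step is a dovetailing search, which can be sketched as a Python generator; the predicate T and a function decode_pair inverting the pairing z ↦ ⟨(z)0 , (z)1 ⟩ are assumptions, not defined here:

def enumerate_domain(T, e, decode_pair):
    # Enumerate {x : φ_e(x)↓} by running through all codes z of pairs
    # ⟨x, s⟩, emitting x whenever s codes a halting computation of
    # machine e on input x.
    z = 0
    while True:
        x, s = decode_pair(z)
        if T(e, x, s):
            yield x
        z += 1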

The fourth clause of Theorem 27.9 provides us with a convenient way of enumerating


the computably enumerable sets: for each e, let We denote the domain of ϕe .
Then if A is any computably enumerable set, A = We , for some e.
The following provides yet another characterization of the computably
enumerable sets.


Theorem 27.10. A set S is computably enumerable if and only if there is a com-


putable relation R( x, y) such that

S = { x : ∃y R( x, y)}.

Proof. In the forward direction, suppose S is computably enumerable. Then


for some e, S = We . For this value of e we can write S as

S = { x : ∃y T (e, x, y)}.

In the reverse direction, suppose S = { x : ∃y R( x, y)}. Define f by

f ( x ) ' µy R( x, y).

Then f is partial computable, and S is the domain of f .

27.12 Computably Enumerable Sets are Closed under Union


and Intersection
The following theorem gives some closure properties on the set of computably
enumerable sets.

Theorem 27.11. Suppose A and B are computably enumerable. Then so are A ∩ B


and A ∪ B.

Proof. Theorem 27.9 allows us to use various characterizations of the computably enu-


merable sets. By way of illustration, we will provide a few different proofs.
For the first proof, suppose A is enumerated by a computable function f ,
and B is enumerated by a computable function g. Let

h( x ) = µy ( f (y) = x ∨ g(y) = x ) and


j( x ) = µy ( f ((y)0 ) = x ∧ g((y)1 ) = x ).

Then A ∪ B is the domain of h, and A ∩ B is the domain of j.


Here is what is going on, in computational terms: given procedures that
enumerate A and B, we can semi-decide if an element x is in A ∪ B by looking
for x in either enumeration; and we can semi-decide if an element x is in A ∩ B
by looking for x in both enumerations at the same time.
For the second proof, suppose again that A is enumerated by f and B is
enumerated by g. Let
k ( x ) = f ( x/2) if x is even, and k ( x ) = g(( x − 1)/2) if x is odd.

Then k enumerates A ∪ B; the idea is that k just alternates between the enumer-
ations offered by f and g. Enumerating A ∩ B is trickier. If A ∩ B is empty, it


is trivially computably enumerable. Otherwise, let c be any element of A ∩ B,


and define l by
l ( x ) = f (( x )0 ) if f (( x )0 ) = g(( x )1 ), and l ( x ) = c otherwise.

In computational terms, l runs through pairs of elements in the enumerations


of f and g, and outputs every match it finds; otherwise, it just stalls by out-
putting c.
For the last proof, suppose A is the domain of the partial function m( x ) and
B is the domain of the partial function n( x ). Then A ∩ B is the domain of the
partial function m( x ) + n( x ).
In computational terms, if A is the set of values for which m halts and B
is the set of values for which n halts, A ∩ B is the set of values for which both
procedures halt.
Expressing A ∪ B as a set of halting values is more difficult, because one
has to simulate m and n in parallel. Let d be an index for m and let e be an
index for n; in other words, m = ϕd and n = ϕe . Then A ∪ B is the domain of
the function
p( x ) = µy ( T (d, x, y) ∨ T (e, x, y)).
In computational terms, on input x, p searches for either a halting compu-
tation for m or a halting computation for n, and halts if it finds either one.
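The alternating enumeration k from the second proof is a one-line dispatch in Python (with f and g enumerating A and B):

def k(x, f, g):
    # Even inputs sample the enumeration of A, odd inputs that of B.
    return f(x // 2) if x % 2 == 0 else g((x - 1) // 2)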

27.13 Computably Enumerable Sets not Closed under


Complement
Suppose A is computably enumerable. Is the complement of A, Ā = N \
A, necessarily computably enumerable as well? The following theorem and
corollary show that the answer is “no.”

Theorem 27.12. Let A be any set of natural numbers. Then A is computable if and
only if both A and Ā are computably enumerable.

Proof. The forwards direction is easy: if A is computable, then Ā is com-
putable as well (χĀ = 1 −̇ χ A ), and so both are computably enumerable.
In the other direction, suppose A and Ā are both computably enumerable.
Let A be the domain of ϕd , and let Ā be the domain of ϕe . Define h by

h( x ) = µs ( T (d, x, s) ∨ T (e, x, s)).

In other words, on input x, h searches for either a halting computation of ϕd
or a halting computation of ϕe . Now, if x ∈ A, it will succeed in the first case,
and if x ∈ Ā, it will succeed in the second case. So, h is a total computable


function. But now we have that for every x, x ∈ A if and only if T (d, x, h( x )),
i.e., if ϕd is the one that is defined. Since T (d, x, h( x )) is a computable relation,
A is computable.

It is easier to understand what is going on in informal computational terms:
to decide A, on input x search for halting computations of ϕd and ϕe . One of
them is bound to halt; if it is ϕd , then x is in A, and otherwise, x is in Ā.
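Written out as a sketch in Python (with T, d, and e as in the proof), the decision procedure is a parallel search:

def decide(x, T, d, e):
    # Exactly one of the two searches must succeed, so the loop
    # always terminates.
    s = 0
    while True:
        if T(d, x, s):
            return 1    # x ∈ A
        if T(e, x, s):
            return 0    # x ∈ Ā
        s += 1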

Corollary 27.13. The complement of K0 is not computably enumerable.

Proof. We know that K0 is computably enumerable, but not computable. If the
complement of K0 were computably enumerable, then K0 would be computable
by Theorem 27.12.

27.14 Reducibility
We now know that there is at least one set, K0 , that is computably enumerable
but not computable. It should be clear that there are others. The method of
reducibility provides a powerful method of showing that other sets have these
properties, without constantly having to return to first principles.
Generally speaking, a “reduction” of a set A to a set B is a method of
transforming answers to whether or not elements are in B into answers as
to whether or not elements are in A. We will focus on a notion called “many-
one reducibility,” but there are many other notions of reducibility available,
with varying properties. Notions of reducibility are also central to the study
of computational complexity, where efficiency issues have to be considered as
well. For example, a set is said to be “NP-complete” if it is in NP and every
NP problem can be reduced to it, using a notion of reduction that is similar to
the one described below, only with the added requirement that the reduction
can be computed in polynomial time.
We have already used this notion implicitly. Define the set K by

K = { x : ϕ x ( x ) ↓},

i.e., K = { x : x ∈ Wx }. Our proof that the halting problem is unsolvable,
Theorem 27.6, shows most directly that K is not computable. Recall that K0 is the set

K0 = {⟨e, x ⟩ : ϕe ( x ) ↓},

i.e., K0 = {⟨e, x ⟩ : x ∈ We }. It is easy to extend any proof of the uncom-


putability of K to the uncomputability of K0 : if K0 were computable, we could
decide whether or not an element x is in K simply by asking whether or not
the pair ⟨ x, x ⟩ is in K0 . The function f which maps x to ⟨ x, x ⟩ is an example of
a reduction of K to K0 .


Definition 27.14. Let A and B be sets. Then A is said to be many-one reducible


to B, written A ≤m B, if there is a computable function f such that for every
natural number x,

x∈A if and only if f ( x ) ∈ B.

If A is many-one reducible to B and vice-versa, then A and B are said to be


many-one equivalent, written A ≡m B.

If the function f in the definition above happens to be injective, A is said


to be one-one reducible to B. Most of the reductions described below meet this
stronger requirement, but we will not use this fact.
It is true, but by no means obvious, that one-one reducibility really is a
stronger requirement than many-one reducibility. In other words, there are
infinite sets A and B such that A is many-one reducible to B but not one-one
reducible to B.

27.15 Properties of Reducibility


The intuition behind writing A ≤m B is that A is “no harder than” B. The
following two propositions support this intuition.

Proposition 27.15. If A ≤m B and B ≤m C, then A ≤m C.

Proof. Composing a reduction of A to B with a reduction of B to C yields a


reduction of A to C. (You should check the details!)

Proposition 27.16. Let A and B be any sets, and suppose A is many-one reducible
to B.

1. If B is computably enumerable, so is A.

2. If B is computable, so is A.

Proof. Let f be a many-one reduction from A to B. For the first claim, just
check that if B is the domain of a partial function g, then A is the domain
of g ◦ f :

x ∈ A iff f ( x ) ∈ B
iff g( f ( x )) ↓.

For the second claim, remember that if B is computable then B and B̄ are
computably enumerable. It is not hard to check that f is also a many-one
reduction of Ā to B̄, so, by the first part of this proof, A and Ā are computably
enumerable. So A is computable as well. (Alternatively, you can check that
χ A = χ B ◦ f ; so if χ B is computable, then so is χ A .)


A more general notion of reducibility called Turing reducibility is useful


in other contexts, especially for proving undecidability results. Note that by
Proposition 27.16, the complement of K0 is not reducible to K0 , since the complement is not
computably enumerable. But, intuitively, if you knew the answers to questions about K0 ,
you would know the answer to questions about its complement as well. A set
A is said to be Turing reducible to B if one can determine answers to questions
in A using a computable procedure that can ask questions about B. This is
more liberal than many-one reducibility, in which (1) you are only allowed to
ask one question about B, and (2) a “yes” answer has to translate to a “yes”
answer to the question about A, and similarly for “no.” It is still the case
that if A is Turing reducible to B and B is computable then A is computable
as well (though, as we have seen, the analogous statement does not hold for
computable enumerability).
You should think about the various notions of reducibility we have dis-
cussed, and understand the distinctions between them. We will, however,
only deal with many-one reducibility in this chapter. Incidentally, both types
of reducibility discussed in the last paragraph have analogues in computa-
tional complexity, with the added requirement that the Turing machines run in
polynomial time: the complexity version of many-one reducibility is known as
Karp reducibility, while the complexity version of Turing reducibility is known
as Cook reducibility.

27.16 Complete Computably Enumerable Sets


Definition 27.17. A set A is a complete computably enumerable set (under many-
one reducibility) if
1. A is computably enumerable, and
2. for any other computably enumerable set B, B ≤m A.
In other words, complete computably enumerable sets are the “hardest”
computably enumerable sets possible; they allow one to answer questions
about any computably enumerable set.
Theorem 27.18. K, K0 , and K1 are all complete computably enumerable sets.

Proof. To see that K0 is complete, let B be any computably enumerable set.


Then for some index e,
B = We = { x : ϕe ( x ) ↓}.
Let f be the function f ( x ) = he, x i. Then for every natural number x, x ∈ B if
and only if f ( x ) ∈ K0 . In other words, f reduces B to K0 .
To see that K1 is complete, note that in the proof of Proposition 27.19 we reduced K0 to it.
So, by Proposition 27.15, any computably enumerable set can be reduced to K1 as well.
K can be reduced to K0 in much the same way.


So, it turns out that all the examples of computably enumerable sets that
we have considered so far are either computable, or complete. This should
seem strange! Are there any examples of computably enumerable sets that
are neither computable nor complete? The answer is yes, but it wasn’t until
the middle of the 1950s that this was established by Friedberg and Muchnik,
independently.

27.17 An Example of Reducibility


Let us consider an application of ??.

Proposition 27.19. Let


K1 = {e : ϕe (0) ↓}.
Then K1 is computably enumerable but not computable.

Proof. Since K1 = {e : ∃s T (e, 0, s)}, K1 is computably enumerable by Theorem 27.10.


To show that K1 is not computable, let us show that K0 is reducible to it.
This is a little bit tricky, since using K1 we can only ask questions about
computations that start with a particular input, 0. Suppose you have a smart
friend who can answer questions of this type (friends like this are known as
“oracles”). Then suppose someone comes up to you and asks you whether
or not ⟨e, x ⟩ is in K0 , that is, whether or not machine e halts on input x. One
thing you can do is build another machine, ex , that, for any input, ignores that
input and instead runs e on input x. Then clearly the question as to whether
machine e halts on input x is equivalent to the question as to whether machine
ex halts on input 0 (or any other input). So, then you ask your friend whether
this new machine, ex , halts on input 0; your friend’s answer to the modified
question provides the answer to the original one. This provides the desired
reduction of K0 to K1 .
Using the universal partial computable function, let f be the 3-ary function
defined by
f ( x, y, z) ' ϕ x (y).
Note that f ignores its third input entirely. Pick an index e such that f = ϕ^3_e ;
so we have
ϕ^3_e ( x, y, z) ' ϕ x (y).
By the s-m-n theorem, there is a function s(e, x, y) such that, for every z,

ϕs(e,x,y) (z) ' ϕ^3_e ( x, y, z)


' ϕ x ( y ).

In terms of the informal argument above, s(e, x, y) is an index for the ma-
chine that, for any input z, ignores that input and computes ϕ x (y).


In particular, we have

ϕs(e,x,y) (0) ↓ if and only if ϕ x (y) ↓ .

In other words, ⟨ x, y⟩ ∈ K0 if and only if s(e, x, y) ∈ K1 . So the function g


defined by
g(w) = s(e, (w)0 , (w)1 )

is a reduction of K0 to K1 .

27.18 Totality is Undecidable


Let us consider one more example of using the s-m-n theorem to show that
something is noncomputable. Let Tot be the set of indices of total computable
functions, i.e.
Tot = { x : for every y, ϕ x (y) ↓}.

Proposition 27.20. Tot is not computable.

Proof. To see that Tot is not computable, it suffices to show that K is reducible
to it. Let h( x, y) be defined by
h( x, y) ' 0 if x ∈ K, and h( x, y) is undefined otherwise.

Note that h( x, y) does not depend on y at all. It should not be hard to see that
h is partial computable: on input x, y, we compute h by first simulating the
function ϕ x on input x; if this computation halts, h( x, y) outputs 0 and halts.
So h( x, y) is just Z (µs T ( x, x, s)), where Z is the constant zero function.
Using the s-m-n theorem, there is a primitive recursive function k( x ) such
that for every x and y,
ϕk( x) (y) = 0 if x ∈ K, and ϕk( x) (y) is undefined otherwise.

So ϕk( x) is total if x ∈ K, and undefined otherwise. Thus, k is a reduction of K


to Tot.

It turns out that Tot is not even computably enumerable—its complexity


lies further up on the “arithmetic hierarchy.” But we will not worry about this
strengthening here.


27.19 Rice’s Theorem


If you think about it, you will see that the specifics of Tot do not play into the
proof of Proposition 27.20. We designed h( x, y) to act like the constant function j(y) = 0 ex-
actly when x is in K; but we could just as well have made it act like any other
partial computable function under those circumstances. This observation lets
us state a more general theorem, which says, roughly, that no nontrivial prop-
erty of computable functions is decidable.
Keep in mind that ϕ0 , ϕ1 , ϕ2 , . . . is our standard enumeration of the partial
computable functions.

Theorem 27.21 (Rice’s Theorem). Let C be any set of partial computable functions,
and let A = {n : ϕn ∈ C }. If A is computable, then either C is ∅ or C is the set of
all the partial computable functions.

An index set is a set A with the property that if n and m are indices which
“compute” the same function, then either both n and m are in A, or neither is.
It is not hard to see that the set A in the theorem has this property. Conversely,
if A is an index set and C is the set of functions computed by these indices,
then A = {n : ϕn ∈ C }.
With this terminology, Rice’s theorem is equivalent to saying that no non-
trivial index set is decidable. To understand what the theorem says, it is
helpful to emphasize the distinction between programs (say, in your favorite
programming language) and the functions they compute. There are certainly
questions about programs (indices), which are syntactic objects, that are com-
putable: does this program have more than 150 symbols? Does it have more
than 22 lines? Does it have a “while” statement? Does the string “hello world”
ever appear in the argument to a “print” statement? Rice’s theorem says that
no nontrivial question about the program’s behavior is computable. This in-
cludes questions like these: does the program halt on input 0? Does it ever
halt? Does it ever output an even number?

Proof of Rice’s theorem. Suppose C is neither ∅ nor the set of all the partial com-
putable functions, and let A be the set of indices of functions in C. We will
show that if A were computable, we could solve the halting problem; so A is
not computable.
Without loss of generality, we can assume that the function f which is
nowhere defined is not in C (otherwise, switch C and its complement in the
argument below). Let g be any function in C. The idea is that if we could
decide A, we could tell the difference between indices computing f , and in-
dices computing g; and then we could use that capability to solve the halting
problem.


Here’s how. Using the universal computation predicate, we can define a


function h( x, y), with h( x, y) undefined if ϕ x ( x ) ↑, and h( x, y) ' g(y) otherwise.

To compute h, first we try to compute ϕ x ( x ); if that computation halts, we go


on to compute g(y); and if that computation halts, we return the output. More
formally, we can write

h( x, y) ' P^2_0 ( g(y), Un( x, x )),

where P^2_0 (z0 , z1 ) = z0 is the 2-place projection function returning the 0-th ar-
gument, which is computable.
Then h is a composition of partial computable functions, and the right side
is defined and equal to g(y) just when Un( x, x ) and g(y) are both defined.
Notice that for a fixed x, if ϕ x ( x ) is undefined, then h( x, y) is undefined for
every y; and if ϕ x ( x ) is defined, then h( x, y) ' g(y). So, for any fixed value
of x, either h( x, y) acts just like f or it acts just like g, and deciding whether or
not ϕ x ( x ) is defined amounts to deciding which of these two cases holds. But
this amounts to deciding whether or not the function h x defined by h x (y) ' h( x, y) is in C, and if
A were computable, we could do just that.
More formally, since h is partial computable, it is equal to the function ϕk
for some index k. By the s-m-n theorem there is a primitive recursive function
s such that for each x, ϕs(k,x) (y) = h x (y). Now we have that for each x, if
ϕ x ( x ) ↓, then ϕs(k,x) is the same function as g, and so s(k, x ) is in A. On the
other hand, if ϕ x ( x ) ↑, then ϕs(k,x) is the same function as f , and so s(k, x )
is not in A. In other words we have that for every x, x ∈ K if and only if
s(k, x ) ∈ A. If A were computable, K would be also, which is a contradiction.
So A is not computable.

Rice’s theorem is very powerful. The following immediate corollary shows


some sample applications.

Corollary 27.22. The following sets are undecidable.

1. { x : 17 is in the range of ϕ x }

2. { x : ϕ x is constant}

3. { x : ϕ x is total}

4. { x : whenever y < y′, ϕ x (y) ↓, and if ϕ x (y′) ↓, then ϕ x (y) < ϕ x (y′)}

Proof. These are all nontrivial index sets.


27.20 The Fixed-Point Theorem


Let’s consider the halting problem again. As temporary notation, let us write
⌜ϕ x (y)⌝ for ⟨ x, y⟩; think of this as representing a “name” for the value ϕ x (y).
With this notation, we can reword one of our proofs that the halting problem
is undecidable.
Question: is there a computable function h, with the following property?
For every x and y, h(⌜ϕ x (y)⌝) = 1 if ϕ x (y) ↓, and h(⌜ϕ x (y)⌝) = 0 otherwise.
Answer: No; otherwise, the partial function
g( x ) ' 0 if h(⌜ϕ x ( x )⌝) = 0, and g( x ) is undefined otherwise

would be computable, and so have some index e. But then we have


ϕe (e) ' 0 if h(⌜ϕe (e)⌝) = 0, and ϕe (e) is undefined otherwise,

in which case ϕe (e) is defined if and only if it isn’t, a contradiction.


Now, take a look at the equation with ϕe . There is an instance of self-
reference there, in a sense: we have arranged for the value of ϕe (e) to depend
on ⌜ϕe (e)⌝, in a certain way. The fixed-point theorem says that we can do this,
in general—not just for the sake of proving contradictions.
Lemma 27.23 gives two equivalent ways of stating the fixed-point theorem. Logically
speaking, the fact that the statements are equivalent follows from the fact that
they are both true; but what we really mean is that each one follows straight-
forwardly from the other, so that they can be taken as alternative statements
of the same theorem.
Lemma 27.23. The following statements are equivalent:
1. For every partial computable function g( x, y), there is an index e such that for
every y,
ϕe (y) ' g(e, y).

2. For every computable function f ( x ), there is an index e such that for every y,
ϕ e ( y ) ' ϕ f ( e ) ( y ).

Proof. (1) ⇒ (2): Given f , define g by g( x, y) ' Un( f ( x ), y). Use (1) to get an
index e such that for every y,
ϕe (y) = Un( f (e), y)
= ϕ f ( e ) ( y ).


(2) ⇒ (1): Given g, use the s-m-n theorem to get f such that for every x
and y, ϕ f ( x) (y) ' g( x, y). Use (2) to get an index e such that

ϕe ( y ) = ϕ f (e) ( y )
= g(e, y).
This concludes the proof.
Before showing that statement (1) is true (and hence (2) as well), consider
how bizarre it is. Think of e as being a computer program; statement (1) says
that given any partial computable g( x, y), you can find a computer program
e that computes ge (y) ' g(e, y). In other words, you can find a computer
program that computes a function that references the program itself.
Theorem 27.24. The two statements in Lemma 27.23 are true. Specifically, for every partial
computable function g( x, y), there is an index e such that for every y,
ϕe (y) ' g(e, y).
Proof. The ingredients are already implicit in the discussion of the halting
problem above. Let diag( x ) be a computable function which for each x re-
turns an index for the function f x (y) ' ϕ x ( x, y), i.e.
ϕdiag( x) (y) ' ϕ x ( x, y).
Think of diag as a function that transforms a program for a 2-ary function into
a program for a 1-ary function, obtained by fixing the original program as its
first argument. The function diag can be defined formally as follows: first
define s by
s( x, y) ' Un2 ( x, x, y),
where Un2 is a 3-ary function that is universal for partial computable 2-ary
functions. Then, by the s-m-n theorem, we can find a primitive recursive func-
tion diag satisfying
ϕdiag( x) (y) ' s( x, y).
Now, define the function l by
l ( x, y) ' g(diag( x ), y).
and let ⌜l⌝ be an index for l. Finally, let e = diag(⌜l⌝). Then for every y, we
have
ϕe (y) ' ϕdiag(⌜l⌝) (y)
' ϕ⌜l⌝ (⌜l⌝, y)
' l (⌜l⌝, y)
' g(diag(⌜l⌝), y)
' g(e, y),
as required.


What’s going on? Suppose you are given the task of writing a computer
program that prints itself out. Suppose further, however, that you are working
with a programming language with a rich and bizarre library of string func-
tions. In particular, suppose your programming language has a function diag
which works as follows: given an input string s, diag locates each instance of
the symbol ‘x’ occurring in s, and replaces it by a quoted version of the original
string. For example, given the string

hello x world

as input, the function returns

hello ’hello x world’ world

as output. In that case, it is easy to write the desired program; you can check
that

print(diag(’print(diag(x))’))

does the trick. For more common programming languages like C++ and Java,
the same idea (with a more involved implementation) still works.
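For instance, one standard Python rendering of the diag idea is the following two-line quine (ours, not from the text):

s = 's = %r\nprint(s %% s)'
print(s % s)

Here the %r conversion plays the role of diag's quoting.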
We are only a couple of steps away from the proof of the fixed-point theo-
rem. Suppose a variant of the print function print( x, y) accepts a string x and
another numeric argument y, and prints the string x repeatedly, y times. Then
the “program”

getinput(y); print(diag(’getinput(y); print(diag(x), y)’), y)

prints itself out y times, on input y. Replacing the getinput—print—diag


skeleton by an arbitrary function g( x, y) yields

g(diag(’g(diag(x), y)’), y)

which is a program that, on input y, runs g on the program itself and y. Think-
ing of “quoting” as “using an index for,” we have the proof above.
For now, it is o.k. if you want to think of the proof as formal trickery, or
black magic. But you should be able to reconstruct the details of the argument
given above. When we prove the incompleteness theorems (and the related
“fixed-point theorem”) we will discuss other ways of understanding why it
works.
The same idea can be used to get a “fixed point” combinator. Suppose you
have a lambda term g, and you want another term k with the property that k
is β-equivalent to gk. Define terms

diag( x ) = xx

and
l ( x ) = g(diag( x ))


using our notational conventions; in other words, l is the term λx. g( xx ). Let
k be the term ll. Then we have
k = (λx. g( xx ))(λx. g( xx ))
. g((λx. g( xx ))(λx. g( xx )))
= gk.
If one takes
Y = λg. ((λx. g( xx ))(λx. g( xx )))
then Yg and g(Yg) reduce to a common term; so Yg ≡ β g(Yg). This is known
as “Curry’s combinator.” If instead one takes
Y = (λxg. g( xxg))(λxg. g( xxg))
then in fact Yg reduces to g(Yg), which is a stronger statement. This latter
version of Y is known as “Turing’s combinator.”

27.21 Applying the Fixed-Point Theorem


The fixed-point theorem essentially lets us define partial computable func-
tions in terms of their indices. For example, we can find an index e such that
for every y,
ϕe (y) = e + y.
As another example, one can use the proof of the fixed-point theorem to de-
sign a program in Java or C++ that prints itself out.
Remember that if for each e, we let We be the domain of ϕe , then the se-
quence W0 , W1 , W2 , . . . enumerates the computably enumerable sets. Some of
these sets are computable. One can ask if there is an algorithm which takes as
input a value x, and, if Wx happens to be computable, returns an index for its
characteristic function. The answer is “no,” there is no such algorithm:
Theorem 27.25. There is no partial computable function f with the following prop-
erty: whenever We is computable, then f (e) is defined and ϕ f (e) is its characteristic
function.

Proof. Let f be any computable function; we will construct an e such that We


is computable, but ϕ f (e) is not its characteristic function. Using the fixed point
theorem, we can find an index e such that
ϕe (y) ' 0 if y = 0 and ϕ f (e) (0) ↓= 0, and ϕe (y) is undefined otherwise.
That is, e is obtained by applying the fixed-point theorem to the function de-
fined by g( x, y) ' 0 if y = 0 and ϕ f ( x) (0) ↓= 0, and g( x, y) undefined otherwise.


Informally, we can see that g is partial computable, as follows: on input x and y, the algorithm first checks to see if y is equal to 0. If it is, the algorithm
computes f ( x ), and then uses the universal machine to compute ϕ f ( x) (0). If
this last computation halts and returns 0, the algorithm returns 0; otherwise,
the algorithm doesn’t halt.
But now notice that if ϕ f (e) (0) is defined and equal to 0, then ϕe (y) is de-
fined exactly when y is equal to 0, so We = {0}. If ϕ f (e) (0) is not defined,
or is defined but not equal to 0, then We = ∅. Either way, ϕ f (e) is not the
characteristic function of We , since it gives the wrong answer on input 0.

27.22 Defining Functions using Self-Reference


It is generally useful to be able to define functions in terms of themselves.
For example, given computable functions k, l, and m, the fixed-point lemma
tells us that there is a partial computable function f satisfying the following
equation for every y:
    f(y) ≃ k(y) if l(y) = 0; f(m(y)) otherwise.

Again, more specifically, f is obtained by letting


    g(x, y) ≃ k(y) if l(y) = 0; ϕx(m(y)) otherwise

and then using the fixed-point lemma to find an index e such that ϕe (y) =
g(e, y).
For a concrete example, the “greatest common divisor” function gcd(u, v)
can be defined by
    gcd(u, v) ≃ v if u = 0; gcd(mod(v, u), u) otherwise

where mod(v, u) denotes the remainder of dividing v by u. An appeal to the fixed-point lemma shows that gcd is partial computable. (In fact, this can be put in the format above, letting y code the pair ⟨u, v⟩.) A subsequent induction
on u then shows that, in fact, gcd is total.
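The definition translates directly into Python (a sketch; here Python’s own recursion plays the role the fixed-point lemma plays in the formal argument):

def gcd(u, v):
    if u == 0:
        return v              # base case: gcd(0, v) = v
    return gcd(v % u, u)      # mod(v, u) is the remainder of dividing v by u

assert gcd(12, 18) == 6 and gcd(3, 5) == 1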
Of course, one can cook up self-referential definitions that are much fancier
than the examples just discussed. Most programming languages support def-
initions of functions in terms of themselves, one way or another. Note that
this is a little bit less dramatic than being able to define a function in terms
of an index for an algorithm computing the function, which is what, in full
generality, the fixed-point theorem lets you do.


27.23 Minimization with Lambda Terms


When it comes to the lambda calculus, we’ve shown the following:
1. Every primitive recursive function is represented by a lambda term.
2. There is a lambda term Y such that for any lambda term G, YG ▷ G(YG).
To show that every partial computable function is represented by some lambda
term, we only need to show the following.
Lemma 27.26. Suppose f(x, y) is primitive recursive. Let g be defined by

    g(x) ≃ μy f(x, y) = 0.

Then g is represented by a lambda term.
Proof. The idea is roughly as follows. Given x, we will use the fixed-point lambda term Y to define a function hx(n) which searches for a y starting at n; then g(x) is just hx(0). The function hx can be expressed as the solution of a fixed-point equation:

    hx(n) ≃ n if f(x, n) = 0; hx(n + 1) otherwise.
Here are the details. Since f is primitive recursive, it is represented by some term F. Remember that we also have a lambda term D such that D(M, N, 0) ▷ M and D(M, N, 1) ▷ N. Fixing x for the moment, to represent hx we want to find a term H (depending on x) satisfying

    H(n) ≡ D(n, H(S(n)), F(x, n)).
We can do this using the fixed-point term Y. First, let U be the term
λh. λz. D (z, (h(Sz)), F ( x, z)),
and then let H be the term YU. Notice that the only free variable in H is x. Let
us show that H satisfies the equation above.
By the definition of Y, we have

    H = YU ≡ U(YU) = U(H).

In particular, for each natural number n, we have

    H(n) ≡ U(H, n) ▷ D(n, H(S(n)), F(x, n)),
as required. Notice that if you substitute a numeral m for x in the last line, the
expression reduces to n if F (m, n) reduces to 0, and it reduces to H (S(n)) if
F (m, n) reduces to any other numeral.
To finish off the proof, let G be λx. H(0). Then G represents g; in other words, for every m, G(m) reduces to g(m), if g(m) is defined, and has no normal form otherwise.
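The search that hx performs is just unbounded minimization; as a sketch, in Python:

def mu(f, x):
    # least y with f(x, y) = 0; loops forever if there is none (i.e., undefined)
    n = 0
    while f(x, n) != 0:
        n += 1
    return n

assert mu(lambda x, y: x - y * y, 49) == 7   # least y with 49 - y*y = 0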


Problems
Problem 27.1. Give a reduction of K to K₀.



Part VI

Turing Machines


The material in this part is a basic and informal introduction to Turing machines. It needs more examples and exercises, and perhaps informa-
tion on available Turing machine simulators. The proof of the unsolvabil-
ity of the decision problem uses a successor function, hence all models
are infinite. One could strengthen the result by using a successor rela-
tion instead. There probably are subtle oversights; use these as checks on
students’ attention (but also file issues!).



Chapter 28

Turing Machine Computations

28.1 Introduction
What does it mean for a function, say, from N to N to be computable? Among
the first answers, and the most well known one, is that a function is com-
putable if it can be computed by a Turing machine. This notion was set out
by Alan Turing in 1936. Turing machines are an example of a model of compu-
tation—they are a mathematically precise way of defining the idea of a “com-
putational procedure.” What exactly that means is debated, but it is widely
agreed that Turing machines are one way of specifying computational proce-
dures. Even though the term “Turing machine” evokes the image of a physi-
cal machine with moving parts, strictly speaking a Turing machine is a purely
mathematical construct, and as such it idealizes the idea of a computational
procedure. For instance, we place no restriction on either the time or memory
requirements of a Turing machine: Turing machines can compute something
even if the computation would require more storage space or more steps than
there are atoms in the universe.
It is perhaps best to think of a Turing machine as a program for a spe-
cial kind of imaginary mechanism. This mechanism consists of a tape and a
read-write head. In our version of Turing machines, the tape is infinite in one di-
rection (to the right), and it is divided into squares, each of which may contain
a symbol from a finite alphabet. Such alphabets can contain any number of different symbols, but we will mainly make do with three: ▷, 0, and 1. When the mechanism is started, the tape is empty (i.e., each square contains the symbol 0) except for the leftmost square, which contains ▷, and a finite number of
squares which contain the input. At any time, the mechanism is in one of a
finite number of states. At the outset, the head scans the leftmost square and
in a specified initial state. At each step of the mechanism’s run, the content
of the square currently scanned together with the state the mechanism is in
and the Turing machine program determine what happens next. The Turing
machine program is given by a partial function which takes as input a state q


Figure 28.1: A Turing machine executing its program.

and a symbol σ and outputs a triple ⟨q′, σ′, D⟩. Whenever the mechanism is in state q and reads symbol σ, it replaces the symbol on the current square with σ′, the head moves left, right, or stays put according to whether D is L, R, or N, and the mechanism goes into state q′.
For instance, consider the situation in Figure 28.1. The visible part of the tape of the Turing machine contains the end-of-tape symbol ▷ on the leftmost square, followed by three 1’s, a 0, and four more 1’s. The head is reading the third square from the left, which contains a 1, and is in state q1: we say “the machine is reading a 1 in state q1.” If the program of the Turing machine returns, for input ⟨q1, 1⟩, the triple ⟨q2, 0, N⟩, the machine would now replace the 1 on the third square with a 0, leave the read/write head where it is, and switch to state q2. If then the program returns ⟨q3, 0, R⟩ for input ⟨q2, 0⟩, the machine would now overwrite the 0 with another 0 (effectively, leaving the content of the tape under the read/write head unchanged), move one square to the right, and enter state q3. And so on.
We say that the machine halts when it encounters some state, qn, and symbol, σ, such that there is no instruction for ⟨qn, σ⟩, i.e., the transition function for input ⟨qn, σ⟩ is undefined. In other words, the machine has no instruction
to carry out, and at that point, it ceases operation. Halting is sometimes repre-
sented by a specific halt state h. This will be demonstrated in more detail later
on.
The beauty of Turing’s paper, “On computable numbers,” is that he presents
not only a formal definition, but also an argument that the definition captures
the intuitive notion of computability. From the definition, it should be clear
that any function computable by a Turing machine is computable in the in-
tuitive sense. Turing offers three types of argument that the converse is true,
i.e., that any function that we would naturally regard as computable is com-
putable by such a machine. They are (in Turing’s words):

1. A direct appeal to intuition.


2. A proof of the equivalence of two definitions (in case the new definition
has a greater intuitive appeal).

3. Giving examples of large classes of numbers which are computable.

Our goal is to try to define the notion of computability “in principle,” i.e.,
without taking into account practical limitations of time and space. Of course,
with the broadest definition of computability in place, one can then go on
to consider computation with bounded resources; this forms the heart of the
subject known as “computational complexity.”

Historical Remarks Alan Turing invented Turing machines in 1936. While his interest at the time was the decidability of first-order logic, the paper has
been described as a definitive paper on the foundations of computer design.
In the paper, Turing focuses on computable real numbers, i.e., real numbers
whose decimal expansions are computable; but he notes that it is not hard to
adapt his notions to computable functions on the natural numbers, and so on.
Notice that this was a full five years before the first working general purpose
computer was built in 1941 (by the German Konrad Zuse in his parents’ living
room), seven years before Turing and his colleagues at Bletchley Park built the
code-breaking Colossus (1943), nine years before the American ENIAC (1945),
twelve years before the first British general purpose computer—the Manch-
ester Small-Scale Experimental Machine—was built in Manchester (1948), and
thirteen years before the Americans first tested the BINAC (1949). The Manch-
ester SSEM has the distinction of being the first stored-program computer—
previous machines had to be rewired by hand for each new task.

28.2 Representing Turing Machines


Turing machines can be represented visually by state diagrams. The diagrams
are composed of state cells connected by arrows. Unsurprisingly, each state
cell represents a state of the machine. Each arrow represents an instruction
that can be carried out from that state, with the specifics of the instruction
written above or below the appropriate arrow. Consider the following ma-
chine, which has only two internal states, q0 and q1 , and one instruction:

0, 1, R
start q0 q1

Recall that the Turing machine has a read/write head and a tape with the input written on it. The instruction can be read as if reading a blank in state q0, write a stroke, move right, and move to state q1. This is equivalent to the transition function mapping ⟨q0, 0⟩ to ⟨q1, 1, R⟩.


Example 28.1. Even Machine: The following Turing machine halts if, and only
if, there are an even number of strokes on the tape.

0, 0, R
1, 1, R

start q0 q1

1, 1, R

The state diagram corresponds to the following transition function:

δ(q0, 1) = ⟨q1, 1, R⟩,
δ(q1, 1) = ⟨q0, 1, R⟩,
δ(q1, 0) = ⟨q1, 0, R⟩

The above machine halts only when the input is an even number of strokes.
Otherwise, the machine (theoretically) continues to operate indefinitely. For
any machine and input, it is possible to trace through the configurations of the
machine in order to determine the output. We will give a formal definition
of configurations later. For now, we can intuitively think of configurations
as a series of diagrams showing the state of the machine at any point in time
during operation. Configurations show the content of the tape, the state of the
machine and the location of the read/write head.
Let us trace through the configurations of the even machine if it is started
with an input of 4 1s. In this case, we expect that the machine will halt. We
will then run the machine on an input of 3 1s, where the machine will run
forever.
The machine starts in state q0 , scanning the leftmost 1. We can represent
the initial state of the machine as follows:

▷1₀1110 . . .

The above configuration is straightforward. As can be seen, the machine starts in state q0, scanning the leftmost 1. This is represented by a subscript of the state name on the first 1. The applicable instruction at this point is δ(q0, 1) = ⟨q1, 1, R⟩, and so the machine moves right on the tape and changes to state q1.

▷11₁110 . . .

Since the machine is now in state q1 scanning a stroke, we have to “follow” the instruction δ(q1, 1) = ⟨q0, 1, R⟩. This results in the configuration

▷111₀10 . . .


As the machine continues, the rules are applied again in the same order, re-
sulting in the following two configurations:

▷1111₁0 . . .

▷11110₀ . . .
The machine is now in state q0 scanning a blank. Based on the transition
diagram, we can easily see that there is no instruction to be carried out, and
thus the machine has halted. This means that the input has been accepted.
Suppose next we start the machine with an input of three strokes. The first
few configurations are similar, as the same instructions are carried out, with
only a small difference of the tape input:

▷1₀110 . . .

▷11₁10 . . .
▷111₀0 . . .
▷1110₁ . . .
The machine has now traversed past all the strokes, and is reading a blank
in state q1. As shown in the diagram, there is an instruction of the form δ(q1, 0) = ⟨q1, 0, R⟩. Since the tape is infinitely blank to the right, the machine will continue to execute this instruction forever, staying in state q1 and moving ever further to the right. The machine will never halt, and does not
accept the input.
It is important to note that not all machines will halt. If halting means that
the machine runs out of instructions to execute, then we can create a machine
that never halts simply by ensuring that there is an outgoing arrow for each
symbol at each state. The even machine can be modified to run infinitely by
adding an instruction for scanning a blank at q0 .

Example 28.2.
0, 0, R 0, 0, R
1, 1, R

start q0 q1

1, 1, R

Machine tables are another way of representing Turing machines. Machine tables have the tape alphabet displayed on the x-axis, and the set of machine
states across the y-axis. Inside the table, at the intersection of each state and
symbol, is written the rest of the instruction—the new state, new symbol, and
direction of movement. Machine tables make it easy to determine in what


state, and for what symbol, the machine halts. Any gap in the table is a possible point for the machine to halt. Unlike state diagrams and
instruction sets, where the points at which the machine halts are not always
immediately obvious, any halting points are quickly identified by finding the
gaps in the machine table.

Example 28.3. The machine table for the even machine is:

      0            1
q0                 1, q1, R
q1    0, q1, R     1, q0, R

As we can see, the machine halts when scanning a blank in state q0 .
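Transition tables also translate directly into a dictionary-based simulator. Here is a minimal sketch in Python (the string encoding, with “>” for the end-of-tape marker ▷, “0” for blank, and “1” for a stroke, is an assumption for illustration):

def run(delta, tape, state="q0", head=1, max_steps=1000):
    tape = list(tape)
    for _ in range(max_steps):
        if head == len(tape):
            tape.append("0")             # the tape is blank beyond the input
        symbol = tape[head]
        if (state, symbol) not in delta:
            return state, "".join(tape)  # no instruction: the machine halts
        state, tape[head], direction = delta[(state, symbol)]
        head += {"R": 1, "L": -1, "N": 0}[direction]
        head = max(head, 0)              # at the left end, the head stays put
    return None                          # still running after max_steps

even_delta = {
    ("q0", "1"): ("q1", "1", "R"),
    ("q1", "1"): ("q0", "1", "R"),
    ("q1", "0"): ("q1", "0", "R"),
}

print(run(even_delta, ">1111"))  # ('q0', '>11110'): halts, input accepted
print(run(even_delta, ">111"))   # None: runs forever, cut off at max_steps

For the odd input the simulator gives up after max_steps, which is exactly the behavior traced above: the machine keeps moving right in state q1 forever.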

So far we have only considered machines that read and accept input. How-
ever, Turing machines have the capacity to both read and write. An example
of such a machine (although there are many, many examples) is a doubler. A
doubler, when started with a block of n strokes on the tape, outputs a block
of 2n strokes.

Example 28.4. Before building a doubler machine, it is important to come up with a strategy for solving the problem. Since the machine (as we have
formulated it) cannot remember how many strokes it has read, we need to
come up with a way to keep track of all the strokes on the tape. One such way
is to separate the output from the input with a blank. The machine can then
erase the first stroke from the input, traverse over the rest of the input, leave a
blank, and write two new strokes. The machine will then go back and find the
second stroke in the input, and double that one as well. For each one stroke of
input, it will write two strokes of output. By erasing the input as the machine
goes, we can guarantee that no stroke is missed or doubled twice. When the
entire input is erased, there will be 2n strokes left on the tape.

1, 1, R 1, 1, R

1, 0, R 0, 0, R
start q0 q1 q2

0, 0, R 0, 1, R

q5 q4 q3
0, 0, L 1, 1, L

1, 1, L 1, 1, L 0, 1, L


28.3 Turing Machines


The formal definition of what constitutes a Turing machine looks abstract,
but is actually simple: it merely packs into one mathematical structure all
the information needed to specify the workings of a Turing machine. This
includes (1) which states the machine can be in, (2) which symbols are allowed
to be on the tape, (3) which state the machine should start in, and (4) what the
instruction set of the machine is.
Definition 28.5 (Turing machine). A Turing machine T = ⟨Q, Σ, q0, δ⟩ consists of

1. a finite set of states Q,

2. a finite alphabet Σ which includes ▷ and 0,

3. an initial state q0 ∈ Q,

4. a finite instruction set δ : Q × Σ ⇀ Q × Σ × {L, R, N}.

The partial function δ is also called the transition function of T.
We assume that the tape is infinite in one direction only. For this reason it is useful to designate a special symbol ▷ as a marker for the left end of the tape. This makes it easier for Turing machine programs to tell when they’re “in danger” of running off the tape.
Example 28.6. Even Machine: The even machine is formally the quadruple ⟨Q, Σ, q0, δ⟩ where

    Q = {q0, q1}
    Σ = {▷, 0, 1},
    δ(q0, 1) = ⟨q1, 1, R⟩,
    δ(q1, 1) = ⟨q0, 1, R⟩,
    δ(q1, 0) = ⟨q1, 0, R⟩.

28.4 Configurations and Computations


Recall tracing through the configurations of the even machine earlier. The
imaginary mechanism consisting of tape, read/write head, and Turing ma-
chine program is really just an intuitive way of visualizing what a Turing ma-
chine computation is. Formally, we can define the computation of a Turing
machine on a given input as a sequence of configurations—and a configuration
in turn is a sequence of symbols (corresponding to the contents of the tape
at a given point in the computation), a number indicating the position of the
read/write head, and a state. Using these, we can define what the Turing
machine M computes on a given input.


Definition 28.7 (Configuration). A configuration of Turing machine M = ⟨Q, Σ, q0, δ⟩ is a triple ⟨C, n, q⟩ where

1. C ∈ Σ* is a finite sequence of symbols from Σ,

2. n ∈ N is a number < len(C), and

3. q ∈ Q.

Intuitively, the sequence C is the content of the tape (symbols of all squares
from the leftmost square to the last non-blank or previously visited square), n
is the number of the square the read/write head is scanning (beginning with
0 being the number of the leftmost square), and q is the current state of the
machine.

The potential input for a Turing machine is a sequence of symbols, usually a sequence that encodes a number in some form. The initial configuration of
the Turing machine is that configuration in which we start the Turing machine
to work on that input: the tape contains the tape end marker immediately
followed by the input written on the squares to the right, the read/write head
is scanning the leftmost square of the input (i.e., the square to the right of the
left end marker), and the mechanism is in the designated start state q0 .

Definition 28.8 (Initial configuration). The initial configuration of M for input I ∈ Σ* is

    ⟨▷ ⌢ I, 1, q0⟩

The ⌢ symbol is for concatenation: we want to ensure that there are no blanks between the left end marker and the beginning of the input.

Definition 28.9. We say that a configuration ⟨C, n, q⟩ yields ⟨C′, n′, q′⟩ in one step (according to M), iff

1. the n-th symbol of C is σ,

2. the instruction set of M specifies δ(q, σ) = ⟨q′, σ′, D⟩,

3. the n-th symbol of C′ is σ′, and

4. a) D = L and n′ = n − 1 if n > 0, otherwise n′ = 0, or
   b) D = R and n′ = n + 1, or
   c) D = N and n′ = n,

5. if n′ > len(C), then len(C′) = len(C) + 1 and the n′-th symbol of C′ is 0,

6. for all i such that i < len(C′) and i ≠ n, C′(i) = C(i).


Definition 28.10. A run of M on input I is a sequence Ci of configurations of M, where C0 is the initial configuration of M for input I, and each Ci yields Ci+1 in one step.

We say that M halts on input I after k steps if Ck = ⟨C, n, q⟩, the nth symbol of C is σ, and δ(q, σ) is undefined. In that case, the output of M for input I is O, where O is a string of symbols not beginning or ending in 0 such that C = ▷ ⌢ 0^i ⌢ O ⌢ 0^j for some i, j ∈ N.

According to this definition, the output O of M always begins and ends in a symbol other than 0, or, if at time k the entire tape is filled with 0 (except for the leftmost ▷), O is the empty string.
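Definitions 28.9 and 28.10 can be rendered as a one-step function on configuration triples; here is a minimal sketch in Python (the string encoding is an assumption, as in the earlier run() sketch):

def step(delta, config):
    C, n, q = config                   # tape contents, head position, state
    if (q, C[n]) not in delta:
        return None                    # delta is undefined: the machine halts
    q1, sigma1, D = delta[(q, C[n])]
    C = C[:n] + sigma1 + C[n + 1:]     # write the new symbol on square n
    n = {"L": max(n - 1, 0), "R": n + 1, "N": n}[D]
    if n >= len(C):
        C += "0"                       # extend the tape with a blank square
    return (C, n, q1)

A run in the sense of Definition 28.10 is then the sequence config, step(delta, config), step(delta, step(delta, config)), . . . , continued until step returns None.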

28.5 Unary Representation of Numbers


Turing machines work on sequences of symbols written on their tape. De-
pending on the alphabet a Turing machine uses, these sequences of symbols
can represent various inputs and outputs. Of particular interest, of course, are
Turing machines which compute arithmetical functions, i.e., functions of natu-
ral numbers. A simple way to represent positive integers is by coding them as sequences of a single symbol 1. If n ∈ N, let 1^n be the empty sequence if n = 0, and otherwise the sequence consisting of exactly n 1’s.

Definition 28.11 (Computation). A Turing machine M computes the function f : N^n → N iff M halts on input

    1^{k1} 0 1^{k2} 0 . . . 0 1^{kn}

with output 1^{f(k1,...,kn)}.
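As a sketch, the input/output convention rendered in Python (the helper names are illustrative):

def encode(*args):
    # unary blocks separated by single blanks, after the end-of-tape marker
    return ">" + "0".join("1" * k for k in args)

def decode(tape):
    # number of strokes left on the tape; correct if the output is one block
    return tape.count("1")

assert encode(3, 2) == ">111011"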

Example 28.12. Addition: Build a machine that, when given an input of two
non-empty strings of 1’s of length n and m, computes the function f (n, m) =
n + m.
We want to come up with a machine that starts with two blocks of strokes
on the tape and halts with one block of strokes. We first need a method to carry this out. The input strokes are separated by a blank, so one method would
be to write a stroke on the square containing the blank, and erase the first (or
last) stroke. This would result in a block of n + m 1’s. Alternatively, we could
proceed in a similar way to the doubler machine, by erasing a stroke from the
first block, and adding one to the second block of strokes until the first block
has been removed completely. We will proceed with the former method.

1, 1, R 1, 1, R 1, 0, N

0, 1, R 0, 0, L
start q0 q1 q2
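As a sketch, here is this addition machine as a transition table, reusing the run() and encode() sketches from earlier (the diagram above is the authoritative description):

add_delta = {
    ("q0", "1"): ("q0", "1", "R"),  # move right across the first block
    ("q0", "0"): ("q1", "1", "R"),  # overwrite the separating blank with a stroke
    ("q1", "1"): ("q1", "1", "R"),  # move right across the second block
    ("q1", "0"): ("q2", "0", "L"),  # at the end, step back onto the last stroke
    ("q2", "1"): ("q2", "0", "N"),  # erase it; (q2, "0") is undefined, so halt
}

print(run(add_delta, encode(3, 2)))  # ('q2', '>1111100'): a block of 3 + 2 strokes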


28.6 Halting States


Although we have defined our machines to halt only when there is no in-
struction to carry out, common representations of Turing machines have a
dedicated halting state, h, such that h ∈ Q.
The idea behind a halting state is simple: when the machine has finished
operation (it is ready to accept input, or has finished writing the output), it
goes into a state h where it halts. Some machines have two halting states, one
that accepts input and one that rejects input.

Example 28.13. Halting States. To elucidate this concept, let us begin with an
alteration of the even machine. Instead of having the machine halt in state q0
if the input is even, we can add an instruction to send the machine into a halt
state.
0, 0, R
1, 1, R

start q0 q1

1, 1, R
0, 0, N

Let us further expand the example. When the machine determines that the
input is odd, it never halts. We can alter the machine to include a reject state
by replacing the looping instruction with an instruction to go to a reject state r.

1, 1, R

start q0 q1

1, 1, R
0, 0, N 0, 0, N

h r

Adding a dedicated halting state can be advantageous in cases like this, where it makes explicit when the machine accepts/rejects certain inputs. How-
ever, it is important to note that no computing power is gained by adding a
dedicated halting state. Similarly, a less formal notion of halting has its own


advantages. The definition of halting used so far in this chapter makes the
proof of the Halting Problem intuitive and easy to demonstrate. For this rea-
son, we continue with our original definition.

28.7 Combining Turing Machines

The examples of Turing machines we have seen so far have been fairly sim-
ple in nature. But in fact, any problem that can be solved with any modern
programming language can also be solved with Turing machines. To build
more complex Turing machines, it is important to convince ourselves that we
can combine them, so we can build machines to solve more complex prob-
lems by breaking the procedure into simpler parts. If we can find a natural
way to break a complex problem down into constituent parts, we can tackle
the problem in several stages, creating several simple Turing machines and
combining them into one machine that can solve the problem. This point is
especially important when tackling the Halting Problem in the next section.

Example 28.14. Combining Machines: Design a machine that computes the function f(m, n) = 2(m + n).

In order to build this machine, we can combine two machines we are al-
ready familiar with: the addition machine, and the doubler. We begin by
drawing a state diagram for the addition machine.

1, 1, R 1, 1, R 1, 0, N

0, 1, R 0, 0, L
start q0 q1 q2

Instead of halting at state q2, we want to continue operation in order to double the output. Recall that the doubler machine erases the first stroke in the input
and writes two strokes in a separate output. Let’s add an instruction to make
sure the tape head is reading the first stroke of the output of the addition


machine.

1, 1, R 1, 1, R

0, 1, R 0, 0, L
start q0 q1 q2

1, 0, L

1, 1, L q3

., ., R

q4

It is now easy to double the input: all we have to do is connect the doubler machine onto state q4. This requires renaming the states of the doubler machine so that they start at q4 instead of q0, so that we don’t end up with two

starting states. The final diagram should look like:

1, 1, R 1, 1, R

0, 1, R 0, 0, L
start q0 q1 q2

1, 0, L

1, 1, L q3

1, 1, L ., ., R

0, 0, L 0, 0, R
q8 q9 q4

1, 1, L 1, 1, L 1, 0, R

0, 1, L q7 q6 q5
0, 1, R 0, 0, R

1, 1, R 1, 1, R

28.8 Variants of Turing Machines


There are in fact many possible ways to define Turing machines, of which
ours is only one. In some ways, our definition is more liberal than others.
We allow arbitrary finite alphabets, a more restricted definition might allow
only two tape symbols, 1 and 0. We allow the machine to write a symbol to
the tape and move at the same time, other definitions allow either writing or
moving. We allow the possibility of writing without moving the tape head,
other definitions leave out the N “instruction.” In other ways, our definition
is more restrictive. We assumed that the tape is infinite in one direction only,
other definitions allow the tape to be infinite both to the left and the right.
In fact, one can even even allow any number of separate tapes, or even an
infinite grid of squares. We represent the instruction set of the Turing machine
by a transition function; other definitions use a transition relation where the
machine has more than one possible instruction in any given situation.


This last relaxation of the definition is particularly interesting. In our definition, when the machine is in state q reading symbol σ, δ(q, σ) determines what the new symbol, state, and tape head position is. But if we allow the instruction set to be a relation between current state-symbol pairs ⟨q, σ⟩ and new state-symbol-direction triples ⟨q′, σ′, D⟩, the action of the Turing machine may not be uniquely determined: the instruction relation may contain both ⟨q, σ, q′, σ′, D⟩ and ⟨q, σ, q″, σ″, D′⟩. In this case we have a non-deterministic Turing machine. These play an important role in computational complexity theory.
There are also different conventions for when a Turing machine halts: we say it halts when the transition function is undefined; other definitions require the machine to be in a special designated halting state. Since the tapes of our Turing machines are infinite in one direction only, there are cases where a Turing machine can’t properly carry out an instruction: if it reads the leftmost square and is supposed to move left. According to our definition, it just stays put instead, but we could have defined it so that it halts when that happens. There are also different ways of representing numbers (and hence the input-output function computed by a Turing machine): we use unary representation, but you can also use binary representation (this requires two symbols in addition to 0).
Now here is an interesting fact: none of these variations matters as to
which functions are Turing computable. If a function is Turing computable ac-
cording to one definition, it is Turing computable according to all of them.

28.9 The Church-Turing Thesis


Turing machines are supposed to be a precise replacement for the concept of
an effective procedure. Turing took it that anyone who grasped the concept of
an effective procedure and the concept of a Turing machine would have the
intuition that anything that could be done via an effective procedure could be
done by Turing machine. This claim is given support by the fact that all the
other proposed precise replacements for the concept of an effective procedure
turn out to be extensionally equivalent to the concept of a Turing machine—
that is, they can compute exactly the same set of functions. This claim is called
the Church-Turing thesis.

Definition 28.15 (Church-Turing thesis). The Church-Turing Thesis states that anything computable via an effective procedure is Turing computable.

The Church-Turing thesis is appealed to in two ways. The first kind of use of the Church-Turing thesis is an excuse for laziness. Suppose we have a
description of an effective procedure to compute something, say, in “pseudo-
code.” Then we can invoke the Church-Turing thesis to justify the claim that


the same function is computed by some Turing machine, even if we have not in fact constructed it.

The other use of the Church-Turing thesis is more philosophically interesting. It can be shown that there are functions which cannot be computed by a Turing machine. From this, using the Church-Turing thesis, one can conclude that such a function cannot be effectively computed, using any procedure whatsoever. For if there were such a procedure, by the Church-Turing thesis, there would be a Turing machine that computes it. So if we can prove that there is no Turing machine that computes it, there also can’t be an effective procedure.
In particular, the Church-Turing thesis is invoked to claim that the so-called
halting problem not only cannot be solved by Turing machines, it cannot be
effectively solved at all.

Problems
Problem 28.1. Choose an arbitrary input and trace through the configurations of the doubler machine in Example 28.4.
Problem 28.2. The doubler machine in Example 28.4 writes its output to the right of the
input. Come up with a new method for solving the doubler problem which
generates its output immediately to the right of the end-of-tape marker. Build
a machine that executes your method. Check that your machine works by
tracing through the configurations.
Problem 28.3. Design a Turing-machine with alphabet {0, A, B} that accepts
any string of As and Bs where the number of As is the same as the number of
Bs and all the As precede all the Bs, and rejects any string where the number
of As is not equal to the number of Bs or the As do not precede all the Bs.
(E.g., the machine should accept AABB, and AAABBB, but reject both AAB
and AABBAABB.)
Problem 28.4. Design a Turing-machine with alphabet {0, A, B} that takes as
input any string α of As and Bs and duplicates them to produce an output of
the form αα. (E.g. input ABBA should result in output ABBAABBA).
Problem 28.5. Alphabetical?: Design a Turing-machine with alphabet {0, A, B}
that when given as input a finite sequence of As and Bs checks to see if all
the As appear left of all the Bs or not. The machine should leave the input
string on the tape, and output either halt if the string is “alphabetical”, or
loop forever if the string is not.
Problem 28.6. Alphabetizer: Design a Turing-machine with alphabet {0, A, B}
that takes as input a finite sequence of As and Bs rearranges them so that
all the As are to the left of all the Bs. (e.g., the sequence BABAA should
become the sequence AAABB, and the sequence ABBABB should become
the sequence AABBBB).


Problem 28.7. Trace through the configurations of the machine for input ⟨3, 5⟩.

Problem 28.8. Subtraction: Design a Turing machine that when given an input
of two non-empty strings of strokes of length n and m, where n > m, computes
the function f (n, m) = n − m.

Problem 28.9. Equality: Design a Turing machine to compute the following function:

    equality(x, y) = 1 if x = y; 0 if x ≠ y

where x and y are integers greater than 0.

Problem 28.10. Design a Turing machine to compute the function min( x, y)


where x and y are positive integers represented on the tape by strings of 1’s
separated by a 0. You may use additional symbols in the alphabet of the ma-
chine.
The function min selects the smallest value from its arguments, so min(3, 5) =
3, min(20, 16) = 16, and min(4, 4) = 4, and so on.



Chapter 29

Undecidability

29.1 Introduction
It might seem obvious that not every function, even every arithmetical func-
tion, can be computable. There are just too many, whose behavior is too
complicated. Functions defined from the decay of radioactive particles, for
instance, or other chaotic or random behavior. Suppose we start counting 1-
second intervals from a given time, and define the function f (n) as the num-
ber of particles in the universe that decay in the n-th 1-second interval after
that initial moment. This seems like a candidate for a function we cannot ever
hope to compute.
But it is one thing to not be able to imagine how one would compute such
functions, and quite another to actually prove that they are uncomputable.
In fact, even functions that seem hopelessly complicated may, in an abstract
sense, be computable. For instance, suppose the universe is finite in time—
some day, in the very distant future the universe will contract into a single
point, as some cosmological theories predict. Then there is only a finite (but
incredibly large) number of seconds from that initial moment for which f (n)
is defined. And any function which is defined for only finitely many inputs is
computable: we could list the outputs in one big table, or code it in one very
big Turing machine state transition diagram.
We are often interested in special cases of functions whose values give the
answers to yes/no questions. For instance, the question “is n a prime num-
ber?” is associated with the function

    isprime(n) = 1 if n is prime; 0 otherwise.
We say that a yes/no question can be effectively decided, if the associated 1/0-
valued function is effectively computable.
To prove mathematically that there are functions which cannot be effec-
tively computed, or problems that cannot be effectively decided, it is essential to


fix a specific model of computation, and show about it that there are functions
it cannot compute or problems it cannot decide. We can show, for instance,
that not every function can be computed by Turing machines, and not ev-
ery problem can be decided by Turing machines. We can then appeal to the
Church-Turing thesis to conclude that not only are Turing machines not pow-
erful enough to compute every function, but no effective procedure can.

The key to proving such negative results is the fact that we can assign
numbers to Turing machines themselves. The easiest way to do this is to enu-
merate them, perhaps by fixing a specific way to write down Turing machines
and their programs, and then listing them in a systematic fashion. Once we
see that this can be done, then the existence of Turing-uncomputable functions
follows by simple cardinality considerations: the set of functions from N to N (in fact, even just from N to {0, 1}) is non-enumerable, but since we can enu-
merate all the Turing machines, the set of Turing-computable functions is only
denumerable.

We can also define specific functions and problems which we can prove
to be uncomputable and undecidable, respectively. One such problem is the
so-called Halting Problem. Turing machines can be finitely described by list-
ing their instructions. Such a description of a Turing machine, i.e., a Turing
machine program, can of course be used as input to another Turing machine.
So we can consider Turing machines that decide questions about other Tur-
ing machines. One particularly interesting question is this: “Does the given
Turing machine eventually halt when started on input n?” It would be nice if
there were a Turing machine that could decide this question: think of it as a
quality-control Turing machine which ensures that Turing machines don’t get
caught in infinite loops and such. The interesting fact, which Turing proved,
is that there cannot be such a Turing machine. There cannot be a single Turing
machine which, when started on input consisting of a description of a Turing
machine M and some number n, will always halt with either output 1 or 0
according to whether M would have halted when started on input n
or not.

Once we have examples of specific undecidable problems we can use them to show that other problems are undecidable, too. For instance, one celebrated
undecidable problem is the question, “Is the first-order formula ϕ valid?”.
There is no Turing machine which, given as input a first-order formula ϕ, is
guaranteed to halt with output 1 or 0 according to whether ϕ is valid or not.
Historically, the question of finding a procedure to effectively solve this prob-
lem was called simply “the” decision problem; and so we say that the decision
problem is unsolvable. Turing and Church proved this result independently
at around the same time, so it is also called the Church-Turing Theorem.


29.2 Enumerating Turing Machines


We can show that the set of all Turing machines is enumerable. This follows
from the fact that each Turing machine can be finitely described. The set of
states and the tape vocabulary are finite sets. The transition function is a par-
tial function from Q × Σ to Q × Σ × { L, R, N }, and so likewise can be specified
by listing its values for the finitely many argument pairs for which it is de-
fined. Of course, strictly speaking, the states and vocabulary can be anything;
but the behavior of the Turing machine is independent of which objects serve
as states and vocabulary. So we may assume, for instance, that the states and
vocabulary symbols are natural numbers, or that the states and vocabulary
are all strings of letters and digits.
Suppose we fix a denumerable vocabulary for specifying Turing machines: σ0 = ▷, σ1 = 0, σ2 = 1, σ3, . . . , R, L, N, q0, q1, . . . . Then any Turing machine can be specified by some finite string of symbols from this alphabet (though not every finite string of symbols specifies a Turing machine). For instance, suppose we have a Turing machine M = ⟨Q, Σ, q, δ⟩ where

    Q = {q′0, . . . , q′n} ⊆ {q0, q1, . . . } and
    Σ = {▷, σ′1, σ′2, . . . , σ′m} ⊆ {σ0, σ1, . . . }.

We could specify it by the string

    q′0 q′1 . . . q′n ▷ σ′1 . . . σ′m ▷ q ▷ S(σ′0, q′0) ▷ . . . ▷ S(σ′m, q′n)

where S(σ′i, q′j) is the string σ′i q′j δ(σ′i, q′j) if δ(σ′i, q′j) is defined, and σ′i q′j otherwise.
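To make the idea concrete, here is one such (hypothetical) encoding written out in Python; any injective way of flattening a description into a finite string would serve the argument equally well:

def describe(Q, Sigma, q0, delta):
    instr = sorted((q, s) + delta[(q, s)] for (q, s) in delta)
    return ";".join([",".join(Q), ",".join(Sigma), q0] +
                    [" ".join(i) for i in instr])

print(describe(["q0", "q1"], [">", "0", "1"], "q0", {
    ("q0", "1"): ("q1", "1", "R"),
    ("q1", "1"): ("q0", "1", "R"),
    ("q1", "0"): ("q1", "0", "R"),
}))   # one finite string per machine, over a fixed finite alphabet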
Theorem 29.1. There are functions from N to N which are not Turing computable.
Proof. We know that the set of finite strings of symbols from a denumerable
alphabet is enumerable. This gives us that the set of descriptions of Turing
machines, as a subset of the finite strings from the enumerable vocabulary {q0, q1, . . . , ▷, σ1, σ2, . . . }, is itself enumerable. Since every Turing computable
function is computed by some (in fact, many) Turing machines, this means
that the set of all Turing computable functions from N to N is also enumer-
able.
On the other hand, the set of all functions from N to N is not enumerable.
This follows immediately from the fact that not even the set of all functions
of one argument from N to the set {0, 1} is enumerable. If all functions were
computable by some Turing machine we could enumerate the set of all func-
tions. So there are some functions that are not Turing-computable.

29.3 The Halting Problem


Assume we have fixed some finite descriptions of Turing machines. Using
these, we can enumerate Turing machines via their descriptions, say, ordered


by the lexicographic ordering. Each Turing machine thus receives an index: its
place in the enumeration M1 , M2 , M3 , . . . of Turing machine descriptions.
We know that there must be non-Turing-computable functions: the set of
Turing machine descriptions—and hence the set of Turing machines—is enu-
merable, but the set of all functions from N to N is not. But we can find
specific examples of non-computable functions as well. One such function is
the halting function.

Definition 29.2 (Halting function). The halting function h is defined as

    h(e, n) = 0 if machine Me does not halt for input n; 1 if machine Me halts for input n.

Definition 29.3 (Halting problem). The Halting Problem is the problem of determining (for any e, n) whether the Turing machine Me halts for an input of n strokes.

We show that h is not Turing-computable by showing that a related function, s, is not Turing-computable. This proof relies on the fact that anything
that can be computed by a Turing machine can be computed using just two
symbols: 0 and 1, and the fact that two Turing machines can be hooked to-
gether to create a single machine.

Definition 29.4. The function s is defined as

    s(e) = 0 if machine Me does not halt for input e; 1 if machine Me halts for input e.

Lemma 29.5. The function s is not Turing computable.

Proof. We suppose, for contradiction, that the function s is Turing-computable. Then there would be a Turing machine S that computes s. We may assume,
without loss of generality, that when S halts, it does so while scanning the first
square. This machine can be “hooked up” to another machine J, which halts if
it is started on a blank tape (i.e., if it reads 0 in the initial state while scanning
the square to the right of the end-of-tape symbol), and otherwise wanders off
to the right, never halting. S ⌢ J, the machine created by hooking S to J, is a Turing machine, so it is Me for some e (i.e., it appears somewhere in the
enumeration). Start Me on an input of e 1s. There are two possibilities: either
Me halts or it does not halt.

1. Suppose Me halts for an input of e 1s. Then s(e) = 1. So S, when started on e, halts with a single 1 as output on the tape. Then J starts with a 1 on the tape. In that case J does not halt. But Me is the machine S ⌢ J, so
it should do exactly what S followed by J would do. So Me cannot halt
for an input of e 1’s.


2. Now suppose Me does not halt for an input of e 1s. Then s(e) = 0, and
S, when started on input e, halts with a blank tape. J, when started on
a blank tape, immediately halts. Again, Me does what S followed by J
would do, so Me must halt for an input of e 1’s.

This shows there cannot be a Turing machine S: s is not Turing computable.
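The shape of this diagonal argument can be seen in Python dress (a sketch, not a proof; halts_on_self is the hypothetical decider for s, and the lemma shows that no such computable function exists):

def halts_on_self(src):
    raise NotImplementedError("the assumed decider for s")

def contradiction(src):
    if halts_on_self(src):   # if the decider says the program halts on itself ...
        while True:          # ... behave like J started on a 1: never halt
            pass
    return 0                 # ... otherwise behave like J on a blank tape: halt

# Applied to its own source, contradiction would halt iff it does not halt.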

Theorem 29.6 (Unsolvability of the Halting Problem). The halting problem is unsolvable, i.e., the function h is not Turing computable.

Proof. Suppose h were Turing computable, say, by a Turing machine H. We could use H to build a Turing machine that computes s: First, make a copy of
the input (separated by a blank). Then move back to the beginning, and run
H. We can clearly make a machine that does the former, and if H existed, we
would be able to “hook it up” to such a modified doubling machine to get a
new machine which would determine if Me halts on input e, i.e., computes s.
But we’ve already shown that no such machine can exist. Hence, h is also not
Turing computable.

29.4 The Decision Problem


We say that first-order logic is decidable iff there is an effective method for
determining whether or not a given sentence is valid. As it turns out, there is
no such method: the problem of deciding validity of first-order sentences is
unsolvable.
In order to establish this important negative result, we prove that the de-
cision problem cannot be solved by a Turing machine. That is, we show that
there is no Turing machine which, whenever it is started on a tape that con-
tains a first-order sentence, eventually halts and outputs either 1 or 0 depend-
ing on whether the sentence is valid or not. By the Church-Turing thesis, ev-
ery function which is computable is Turing computable. So if this “validity
function” were effectively computable at all, it would be Turing computable.
If it isn’t Turing computable, then, it also cannot be effectively computable.
Our strategy for proving that the decision problem is unsolvable is to re-
duce the halting problem to it. This means the following: We have proved that
the function h(e, w) that halts with output 1 if the Turing-machine described
by e halts on input w and outputs 0 otherwise, is not Turing-computable. We
will show that if there were a Turing machine that decides validity of first-
order sentences, then there is also a Turing machine that computes h. Since h
cannot be computed by a Turing machine, there cannot be a Turing machine
that decides validity either.
The first step in this strategy is to show that for every input w and a Turing
machine M, we can effectively describe a sentence τ ( M, w) representing the


instruction set of M and the input w and a sentence α(M, w) expressing “M eventually halts” such that:

    ⊨ τ(M, w) → α(M, w) iff M halts for input w.

The bulk of our proof will consist in describing these sentences τ(M, w) and α(M, w) and verifying that τ(M, w) → α(M, w) is valid iff M halts on input w.

29.5 Representing Turing Machines


In order to represent Turing machines and their behavior by a sentence of
first-order logic, we have to define a suitable language. The language consists
of two parts: predicate symbols for describing configurations of the machine,
and expressions for numbering execution steps (“moments”) and positions on
the tape.
We introduce two kinds of predicate symbols, both of them 2-place: For
each state q, a predicate symbol Qq , and for each tape symbol σ, a predicate
symbol Sσ . The former allow us to describe the state of M and the position of
its tape head, the latter allow us to describe the contents of the tape.
In order to express the positions of the tape head and the number of steps executed, we need a way to express numbers. This is done using a constant symbol 0, and a 1-place function symbol ′, the successor function. By convention it is written after its argument (and we leave out the parentheses). So 0 names the leftmost position on the tape as well as the time before the first execution step (the initial configuration), 0′ names the square to the right of the leftmost square, and the time after the first execution step, and so on. We also introduce a predicate symbol < to express both the ordering of tape positions (when it means “to the left of”) and execution steps (then it means “before”).
Once we have the language in place, we list the “axioms” of τ(M, w), i.e., the sentences which, taken together, describe the behavior of M when run on input w. There will be sentences which lay down conditions on 0, ′, and <, sentences that describe the input configuration, and sentences that describe what the configuration of M is after it executes a particular instruction.

Definition 29.7. Given a Turing machine M = ⟨Q, Σ, q0, δ⟩, the language LM consists of:

1. A two-place predicate symbol Qq(x, y) for every state q ∈ Q. Intuitively, Qq(m, n) expresses “after n steps, M is in state q scanning the mth square.”

2. A two-place predicate symbol Sσ(x, y) for every symbol σ ∈ Σ. Intuitively, Sσ(m, n) expresses “after n steps, the mth square contains symbol σ.”

3. A constant symbol 0


4. A one-place function symbol ′

5. A two-place predicate symbol <

For each number n there is a canonical term n̄, the numeral for n, which represents it in LM. 0̄ is 0, 1̄ is 0′, 2̄ is 0′′, and so on. More formally:

    0̄ = 0
    and the numeral for n + 1 is n̄′.

The sentences describing the operation of the Turing machine M on input w = σi1 . . . σik are the following:

1. Axioms describing numbers:

a) A sentence that says that the successor function is injective:

    ∀x ∀y (x′ = y′ → x = y)

b) A sentence that says that every number is less than its successor:

    ∀x x < x′

c) A sentence that ensures that < is transitive:

    ∀x ∀y ∀z ((x < y ∧ y < z) → x < z)

d) A sentence that connects < and =:

    ∀x ∀y (x < y → x ≠ y)

2. Axioms describing the input configuration:

a) After 0 steps (before the machine starts) M is in the initial state q0, scanning square 1:

    Qq0(1, 0)

b) The first k + 1 squares contain the symbols ▷, σi1, . . . , σik:

    S▷(0, 0) ∧ Sσi1(1, 0) ∧ · · · ∧ Sσik(k, 0)

c) Otherwise, the tape is empty:

    ∀x (k < x → S0(x, 0))


3. Axioms describing the transition from one configuration to the next:


For the following, let ϕ(x, y) be the conjunction of all sentences of the form

    ∀z (((z < x ∨ x < z) ∧ Sσ(z, y)) → Sσ(z, y′))

where σ ∈ Σ. We use ϕ(m, n) to express “other than at square m, the tape after n + 1 steps is the same as after n steps.”

a) For every instruction δ(qi, σ) = ⟨qj, σ′, R⟩, the sentence:

    ∀x ∀y ((Qqi(x, y) ∧ Sσ(x, y)) → (Qqj(x′, y′) ∧ Sσ′(x, y′) ∧ ϕ(x, y)))

This says that if, after y steps, the machine is in state qi scanning square x which contains symbol σ, then after y + 1 steps it is scanning square x + 1, is in state qj, square x now contains σ′, and every square other than x contains the same symbol as it did after y steps.
b) For every instruction δ(qi, σ) = ⟨qj, σ′, L⟩, the sentence:

    ∀x ∀y ((Qqi(x′, y) ∧ Sσ(x′, y)) → (Qqj(x, y′) ∧ Sσ′(x′, y′) ∧ ϕ(x′, y))) ∧
    ∀y ((Qqi(0, y) ∧ Sσ(0, y)) → (Qqj(0, y′) ∧ Sσ′(0, y′) ∧ ϕ(0, y)))

Take a moment to think about how this works: now we don’t start
with “if scanning square x . . . ” but: “if scanning square x + 1 . . . ” A
move to the left means that in the next step the machine is scanning
square x. But the square that is written on is x + 1. We do it this
way since we don’t have subtraction or a predecessor function.
Note that numbers of the form x + 1 are 1, 2, . . . , i.e., this doesn’t
cover the case where the machine is scanning square 0 and is sup-
posed to move left (which of course it can’t—it just stays put). That
special case is covered by the second conjunction: it says that if, af-
ter y steps, the machine is scanning square 0 in state qi and square 0
contains symbol σ, then after y + 1 steps it’s still scanning square 0,
is now in state qj, the symbol on square 0 is σ′, and the squares other than square 0 contain the same symbols they contained after y steps.
c) For every instruction δ(qi, σ) = ⟨qj, σ′, N⟩, the sentence:

    ∀x ∀y ((Qqi(x, y) ∧ Sσ(x, y)) → (Qqj(x, y′) ∧ Sσ′(x, y′) ∧ ϕ(x, y)))
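As an illustration (not part of the definition), instantiating clause (a) for the even machine’s instruction δ(q0, 1) = ⟨q1, 1, R⟩ from Example 28.6 gives, in LaTeX notation, the conjunct

\forall x\,\forall y\,\bigl((Q_{q_0}(x, y) \land S_1(x, y)) \rightarrow (Q_{q_1}(x', y') \land S_1(x, y') \land \varphi(x, y))\bigr)

that is: if the machine reads a stroke in q0, then one step later it is one square further right, in q1, and the tape is unchanged.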


Let τ(M, w) be the conjunction of all the above sentences for Turing machine M and input w.

In order to express that M eventually halts, we have to find a sentence that says “after some number of steps, the transition function will be undefined.” Let X be the set of all pairs ⟨q, σ⟩ such that δ(q, σ) is undefined. Let α(M, w) then be the sentence

    ∃x ∃y (⋁⟨q,σ⟩∈X (Qq(x, y) ∧ Sσ(x, y)))

If we use a Turing machine with a designated halting state h, it is even easier: then the sentence α(M, w)

    ∃x ∃y Qh(x, y)

expresses that the machine eventually halts.
Proposition 29.8. If m < k, then τ(M, w) ⊨ m < k.

Proof. Exercise.

29.6 Verifying the Representation


In order to verify that our representation works, we have to prove two things.
First, we have to show that if M halts on input w, then τ ( M, w) → α( M, w) is
valid. Then, we have to show the converse, i.e., that if τ ( M, w) → α( M, w) is
valid, then M does in fact eventually halt when run on input w.
The strategy for proving these is very different. For the first result, we have
to show that a sentence of first-order logic (namely, τ ( M, w) → α( M, w)) is
valid. The easiest way to do this is to give a derivation. Our proof is supposed
to work for all M and w, though, so there isn’t really a single sentence for
which we have to give a derivation, but infinitely many. So the best we can do
is to prove by induction that, whatever M and w look like, and however many
steps it takes M to halt on input w, there will be a derivation of τ ( M, w) →
α( M, w).
Naturally, our induction will proceed on the number of steps M takes before it reaches a halting configuration. In our inductive proof, we’ll establish that for each step n of the run of M on input w, τ(M, w) ⊨ χ(M, w, n), where χ(M, w, n) correctly describes the configuration of M run on w after n steps. Now if M halts on input w after, say, n steps, χ(M, w, n) will describe a halting configuration. We’ll also show that χ(M, w, n) ⊨ α(M, w), whenever χ(M, w, n) describes a halting configuration. So, if M halts on input w, then for some n, M will be in a halting configuration after n steps. Hence, τ(M, w) ⊨ χ(M, w, n) where χ(M, w, n) describes a halting configuration, and since in that case χ(M, w, n) ⊨ α(M, w), we get that τ(M, w) ⊨ α(M, w), i.e., that ⊨ τ(M, w) → α(M, w).


The strategy for the converse is very different. Here we assume that ⊨ τ(M, w) → α(M, w) and have to prove that M halts on input w. From the hypothesis we get that τ(M, w) ⊨ α(M, w), i.e., α(M, w) is true in every structure in which τ(M, w) is true. So we’ll describe a structure 𝔐 in which τ(M, w) is true: its domain will be N, and the interpretation of all the Qq and Sσ will be given by the configurations of M during a run on input w. So, e.g., 𝔐 ⊨ Qq(m, n) iff M, when run on input w for n steps, is in state q and scanning square m. Now since τ(M, w) ⊨ α(M, w) by hypothesis, and since 𝔐 ⊨ τ(M, w) by construction, 𝔐 ⊨ α(M, w). But 𝔐 ⊨ α(M, w) iff there is some n ∈ |𝔐| = N so that M, run on input w, is in a halting configuration after n steps.

Definition 29.9. Let χ(M, w, n) be the sentence

    Qq(m, n) ∧ Sσ₀(0, n) ∧ · · · ∧ Sσₖ(k, n) ∧ ∀x (k < x → S0(x, n))

where q is the state of M at time n, M is scanning square m at time n, square i contains symbol σi at time n for 0 ≤ i ≤ k, and k is the right-most non-blank square of the tape at time 0, or the right-most square the tape head has visited after n steps, whichever is greater.

Lemma 29.10. If M run on input w is in a halting configuration after n steps, then χ(M, w, n) ⊨ α(M, w).

Proof. Suppose that M halts for input w after n steps. There is some state q,
square m, and symbol σ such that:

1. After n steps, M is in state q scanning square m on which σ appears.

2. The transition function δ(q, σ ) is undefined.

χ(M, w, n) is the description of this configuration and will include the clauses Q_q(m, n) and S_σ(m, n). These clauses together imply α(M, w), i.e. (renaming the bound index to avoid clashing with the fixed q and σ),

∃x ∃y ⋁_{⟨q′,σ′⟩∈X} (Q_{q′}(x, y) ∧ S_{σ′}(x, y))

since Q_q(m, n) ∧ S_σ(m, n) ⊨ ⋁_{⟨q′,σ′⟩∈X} (Q_{q′}(m, n) ∧ S_{σ′}(m, n)), as ⟨q, σ⟩ ∈ X.

So if M halts for input w, then there is some n such that χ(M, w, n) ⊨ α(M, w). We will now show that for any time n, τ(M, w) ⊨ χ(M, w, n).

Lemma 29.11. For each n, if M has not halted after n steps, τ(M, w) ⊨ χ(M, w, n).

Proof. Induction basis: If n = 0, then the conjuncts of χ(M, w, 0) are also conjuncts of τ(M, w), and so are entailed by it.


Inductive hypothesis: If M has not halted before the nth step, then τ(M, w) ⊨ χ(M, w, n). We have to show that (unless χ(M, w, n) describes a halting configuration) τ(M, w) ⊨ χ(M, w, n + 1).

Suppose that after n steps, M started on w is in state q scanning square m, on which the symbol σ appears. Since M does not halt after n steps, there must be an instruction of one of the following three forms in the program of M:

1. δ(q, σ) = ⟨q′, σ′, R⟩

2. δ(q, σ) = ⟨q′, σ′, L⟩

3. δ(q, σ) = ⟨q′, σ′, N⟩

We will consider each of these three cases in turn.

1. Suppose there is an instruction of the form (1). By ??, ??, this means that

∀x ∀y ((Q_q(x, y) ∧ S_σ(x, y)) → (Q_{q′}(x′, y′) ∧ S_{σ′}(x, y′) ∧ ϕ(x, y)))

is a conjunct of τ(M, w). This entails the following sentence (universal instantiation, m for x and n for y):

(Q_q(m, n) ∧ S_σ(m, n)) → (Q_{q′}(m′, n′) ∧ S_{σ′}(m, n′) ∧ ϕ(m, n)).

By induction hypothesis, τ(M, w) ⊨ χ(M, w, n), i.e.,

Q_q(m, n) ∧ S_{σ_0}(0, n) ∧ · · · ∧ S_{σ_k}(k, n) ∧ ∀x (k < x → S_0(x, n))

Since after n steps, tape square m contains σ, the corresponding conjunct is S_σ(m, n), so this entails

Q_q(m, n) ∧ S_σ(m, n)

We now get

Q_{q′}(m′, n′) ∧ S_{σ′}(m, n′) ∧
S_{σ_0}(0, n′) ∧ · · · ∧ S_{σ_k}(k, n′) ∧
∀x (k < x → S_0(x, n′))

as follows: The first line comes directly from the consequent of the preceding conditional, by modus ponens. Each conjunct in the middle line (which excludes S_{σ_m}(m, n′)) follows from the corresponding conjunct in χ(M, w, n) together with ϕ(m, n).


If m < k, then τ(M, w) ⊨ m < k (Proposition 29.8), and by transitivity of <, we have ∀x (k < x → m < x). If m = k, then ∀x (k < x → m < x) by logic alone. The last line then follows from the corresponding conjunct in χ(M, w, n), ∀x (k < x → m < x), and ϕ(m, n). If m < k, this already is χ(M, w, n + 1).

Now suppose m = k. In that case, after n + 1 steps, the tape head has also visited square k + 1, which now is the right-most square visited. So χ(M, w, n + 1) has a new conjunct, S_0(k′, n′), and the last conjunct is ∀x (k′ < x → S_0(x, n′)). We have to verify that these two sentences are also implied.

We already have ∀x (k < x → S_0(x, n′)). In particular, this gives us k < k′ → S_0(k′, n′). From the axiom ∀x x < x′ we get k < k′. By modus ponens, S_0(k′, n′) follows.

Also, since τ(M, w) ⊨ k < k′, the axiom for transitivity of < gives us ∀x (k′ < x → S_0(x, n′)). (We leave the verification of this as an exercise.)
2. Suppose there is an instruction of the form (2). Then, by ??, ??,

∀x ∀y ((Q_q(x′, y) ∧ S_σ(x′, y)) → (Q_{q′}(x, y′) ∧ S_{σ′}(x′, y′) ∧ ϕ(x, y))) ∧
∀y ((Q_q(0, y) ∧ S_σ(0, y)) → (Q_{q′}(0, y′) ∧ S_{σ′}(0, y′) ∧ ϕ(0, y)))

is a conjunct of τ(M, w). If m > 0, then let l = m − 1 (i.e., m = l + 1). The first conjunct of the above sentence entails the following:

(Q_q(l′, n) ∧ S_σ(l′, n)) → (Q_{q′}(l, n′) ∧ S_{σ′}(l′, n′) ∧ ϕ(l, n))

Otherwise, let l = m = 0 and consider the following sentence, entailed by the second conjunct:

(Q_q(0, n) ∧ S_σ(0, n)) → (Q_{q′}(0, n′) ∧ S_{σ′}(0, n′) ∧ ϕ(0, n))

Either sentence implies

Q_{q′}(l, n′) ∧ S_{σ′}(m, n′) ∧
S_{σ_0}(0, n′) ∧ · · · ∧ S_{σ_k}(k, n′) ∧
∀x (k < x → S_0(x, n′))

as before. (Note that in the first case, l′ = m, and in the second case l = 0.) But this just is χ(M, w, n + 1).


3. Case (3) is left as an exercise.

We have shown that for any n, τ(M, w) ⊨ χ(M, w, n).

Lemma 29.12. If M halts on input w, then τ ( M, w) → α( M, w) is valid.

Proof. By Lemma 29.11, we know that, for any time n, the description χ(M, w, n) of the configuration of M at time n is entailed by τ(M, w). Suppose M halts after k steps. It will be scanning square m, say. Then χ(M, w, k) describes a halting configuration of M, i.e., it contains as conjuncts both Q_q(m, k) and S_σ(m, k) with δ(q, σ) undefined. Thus, by Lemma 29.10, χ(M, w, k) ⊨ α(M, w). But since τ(M, w) ⊨ χ(M, w, k), we have τ(M, w) ⊨ α(M, w), and therefore τ(M, w) → α(M, w) is valid.

To complete the verification of our claim, we also have to establish the reverse direction: if τ(M, w) → α(M, w) is valid, then M does in fact halt when started on input w.

Lemma 29.13. If ⊨ τ(M, w) → α(M, w), then M halts on input w.

Proof. Consider the L_M-structure 𝔐 with domain N which interprets 0 as 0, ′ as the successor function, and < as the less-than relation, and the predicates Q_q and S_σ as follows:

Q_q^𝔐 = {⟨m, n⟩ : started on w, after n steps, M is in state q scanning square m}
S_σ^𝔐 = {⟨m, n⟩ : started on w, after n steps, square m of M contains symbol σ}

In other words, we construct the structure 𝔐 so that it describes what M started on input w actually does, step by step. Clearly, 𝔐 ⊨ τ(M, w). If ⊨ τ(M, w) → α(M, w), then also 𝔐 ⊨ α(M, w), i.e.,

𝔐 ⊨ ∃x ∃y ⋁_{⟨q,σ⟩∈X} (Q_q(x, y) ∧ S_σ(x, y)).

As |𝔐| = N, there must be m, n ∈ N so that 𝔐 ⊨ Q_q(m, n) ∧ S_σ(m, n) for some q and σ such that δ(q, σ) is undefined. By the definition of 𝔐, this means that M started on input w after n steps is in state q and reading symbol σ, and the transition function is undefined, i.e., M has halted.

29.7 The Decision Problem is Unsolvable


Theorem 29.14. The decision problem is unsolvable.


Proof. Suppose the decision problem were solvable, i.e., suppose there were a Turing machine D of the following sort. Whenever D is started on a tape that contains a sentence ψ of first-order logic as input, D eventually halts, and outputs 1 iff ψ is valid and 0 otherwise. Then we could solve the halting problem as follows. We construct a Turing machine E that, given as input the number e of Turing machine M_e and input w, computes the corresponding sentence τ(M_e, w) → α(M_e, w) and halts, scanning the leftmost square on the tape. The machine E ⌢ D would then, given input e and w, first compute τ(M_e, w) → α(M_e, w) and then run the decision problem machine D on that input. D halts with output 1 iff τ(M_e, w) → α(M_e, w) is valid and outputs 0 otherwise. By Lemma 29.12 and Lemma 29.13, τ(M_e, w) → α(M_e, w) is valid iff M_e halts on input w. Thus, E ⌢ D, given input e and w, halts with output 1 iff M_e halts on input w, and halts with output 0 otherwise. In other words, E ⌢ D would solve the halting problem. But we know, by the unsolvability of the halting problem (??), that no such Turing machine can exist.
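The shape of this reduction can be rendered as a short Python sketch (all names here, D, tau and alpha, are hypothetical stand-ins for the machines and sentence constructions of the proof, not anything defined in the text):

```python
# Sketch of the reduction: from a hypothetical validity decider D we
# would obtain a halting decider, contradicting the unsolvability of
# the halting problem.

def halting_decider_from(D, tau, alpha):
    # D(phi) returns 1 iff phi is valid; tau(e, w) and alpha(e, w) build
    # the sentences tau(M_e, w) and alpha(M_e, w) as strings.
    def halts(e, w):
        phi = f"({tau(e, w)} -> {alpha(e, w)})"
        return D(phi)  # 1 iff the conditional is valid iff M_e halts on w
    return halts
```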

Problems
Problem 29.1. The Three Halting (3-Halt) problem is the problem of giving a
decision procedure to determine whether or not an arbitrarily chosen Turing
Machine halts for an input of three strokes on an otherwise blank tape. Prove
that the 3-Halt problem is unsolvable.

Problem 29.2. Show that if the halting problem is solvable for Turing machine and input pairs Me and n where e ≠ n, then it is also solvable for the cases where e = n.

Problem 29.3. We proved that the halting problem is unsolvable if the input is a number e, which identifies a Turing machine Me via an enumeration of all Turing machines. What if we allow the description of Turing machines from ?? directly as input? (This would require a larger alphabet, of course.) Can there be a Turing machine which decides the halting problem but takes as input descriptions of Turing machines rather than indices? Explain why or why not.

Problem 29.4. Prove Proposition 29.8. (Hint: use induction on k − m.)

Problem 29.5. Complete case (3) of the proof of Lemma 29.11.

Problem 29.6. Give a derivation of S_{σ_i}(i, n′) from S_{σ_i}(i, n) and ϕ(m, n) (assuming i ≠ m, i.e., either i < m or m < i).

Problem 29.7. Give a derivation of ∀x (k′ < x → S_0(x, n′)) from ∀x (k < x → S_0(x, n′)), ∀x x < x′, and ∀x ∀y ∀z ((x < y ∧ y < z) → x < z).



Part VII

Incompleteness


Material in this part covers the incompleteness theorems. It depends on material in the parts on first-order logic (esp., the proof system) and on recursive functions (in the computability part). It is based on Jeremy Avigad's notes, with revisions by Richard Zach.



Chapter 30

Introduction to Incompleteness

30.1 Historical Background


In this section, we will briefly discuss historical developments that will help
put the incompleteness theorems in context. In particular, we will give a very
sketchy overview of the history of mathematical logic; and then say a few
words about the history of the foundations of mathematics.
The phrase “mathematical logic” is ambiguous. One can interpret the
word “mathematical” as describing the subject matter, as in, “the logic of
mathematics,” denoting the principles of mathematical reasoning; or as de-
scribing the methods, as in “the mathematics of logic,” denoting a mathemat-
ical study of the principles of reasoning. The account that follows involves
mathematical logic in both senses, often at the same time.
The study of logic began, essentially, with Aristotle, who lived approxi-
mately 384–322 BCE. His Categories, Prior analytics, and Posterior analytics in-
clude systematic studies of the principles of scientific reasoning, including a
thorough and systematic study of the syllogism.
Aristotle’s logic dominated scholastic philosophy through the middle ages;
indeed, as late as the eighteenth century, Kant maintained that Aristotle's logic
was perfect and in no need of revision. But the theory of the syllogism is far
too limited to model anything but the most superficial aspects of mathemati-
cal reasoning. A century earlier, Leibniz, a contemporary of Newton’s, imag-
ined a complete “calculus” for logical reasoning, and made some rudimentary
steps towards designing such a calculus, essentially describing a version of
propositional logic.
The nineteenth century was a watershed for logic. In 1854 George Boole
wrote The Laws of Thought, with a thorough algebraic study of propositional
logic that is not far from modern presentations. In 1879 Gottlob Frege pub-
lished his Begriffsschrift (Concept writing) which extends propositional logic
with quantifiers and relations, and thus includes first-order logic. In fact,
Frege’s logical systems included higher-order logic as well, and more. In his


Basic Laws of Arithmetic, Frege set out to show that all of arithmetic could be
derived in his Begriffsschrift from purely logical assumptions. Unfortunately,
these assumptions turned out to be inconsistent, as Russell showed in 1902.
But setting aside the inconsistent axiom, Frege more or less invented mod-
ern logic singlehandedly, a startling achievement. Quantificational logic was
also developed independently by algebraically-minded thinkers after Boole,
including Peirce and Schröder.
Let us now turn to developments in the foundations of mathematics. Of
course, since logic plays an important role in mathematics, there is a good deal
of interaction with the developments I just described. For example, Frege de-
veloped his logic with the explicit purpose of showing that all of mathematics
could be based solely on his logical framework; in particular, he wished to
show that mathematics consists of a priori analytic truths instead of, as Kant
had maintained, a priori synthetic ones.
Many take the birth of mathematics proper to have occurred with the
Greeks. Euclid’s Elements, written around 300 B.C., is already a mature rep-
resentative of Greek mathematics, with its emphasis on rigor and precision.
The definitions and proofs in Euclid's Elements survive more or less intact
in high school geometry textbooks today (to the extent that geometry is still
taught in high schools). This model of mathematical reasoning has been held
to be a paradigm for rigorous argumentation not only in mathematics but in
branches of philosophy as well. (Spinoza even presented moral and religious
arguments in the Euclidean style, which is strange to see!)
Calculus was invented by Newton and Leibniz in the seventeenth century.
(A fierce priority dispute raged for centuries, but most scholars today hold
that the two developments were for the most part independent.) Calculus in-
volves reasoning about, for example, infinite sums of infinitely small quanti-
ties; these features fueled criticism by Bishop Berkeley, who argued that belief
in God was no less rational than the mathematics of his time. The methods of
calculus were widely used in the eighteenth century, for example by Leonhard
Euler, who used calculations involving infinite sums with dramatic results.
In the nineteenth century, mathematicians tried to address Berkeley’s crit-
icisms by putting calculus on a firmer foundation. Efforts by Cauchy, Weier-
strass, Bolzano, and others led to our contemporary definitions of limits, con-
tinuity, differentiation, and integration in terms of “epsilons and deltas,” in
other words, devoid of any reference to infinitesimals. Later in the century,
mathematicians tried to push further, and explain all aspects of calculus, in-
cluding the real numbers themselves, in terms of the natural numbers. (Kro-
necker: “God created the whole numbers, all else is the work of man.”) In
1872, Dedekind wrote “Continuity and the irrational numbers,” where he
showed how to “construct” the real numbers as sets of rational numbers (which,
as you know, can be viewed as pairs of natural numbers); in 1888 he wrote
“Was sind und was sollen die Zahlen” (roughly, “What are the natural numbers, and what should they be?”) which aimed to explain the natural numbers
in purely “logical” terms. In 1887 Kronecker wrote “Über den Zahlbegriff”
(“On the concept of number”) where he spoke of representing all mathemati-
cal objects in terms of the integers; in 1889 Giuseppe Peano gave formal, sym-
bolic axioms for the natural numbers.
The end of the nineteenth century also brought a new boldness in dealing
with the infinite. Before then, infinitary objects and structures (like the set of
natural numbers) were treated gingerly; “infinitely many” was understood
as “as many as you want,” and “approaches in the limit” was understood as
“gets as close as you want.” But Georg Cantor showed that it was possible to
take the infinite at face value. Work by Cantor, Dedekind, and others helped to
introduce the general set-theoretic understanding of mathematics that is now
widely accepted.
This brings us to twentieth century developments in logic and founda-
tions. In 1902 Russell discovered the paradox in Frege’s logical system. In 1904
Zermelo proved Cantor’s well-ordering principle, using the so-called “axiom
of choice”; the legitimacy of this axiom prompted a good deal of debate. Be-
tween 1910 and 1913 the three volumes of Russell and Whitehead’s Principia
Mathematica appeared, extending the Fregean program of establishing mathe-
matics on logical grounds. Unfortunately, Russell and Whitehead were forced
to adopt two principles that seemed hard to justify as purely logical: an axiom
of infinity and an axiom of “reducibility.” In the 1900’s Poincaré criticized the
use of “impredicative definitions” in mathematics, and in the 1910’s Brouwer
began proposing to refound all of mathematics on an “intuitionistic” basis,
which avoided the use of the law of the excluded middle (ϕ ∨ ¬ ϕ).
Strange days indeed! The program of reducing all of mathematics to logic
is now referred to as “logicism,” and is commonly viewed as having failed,
due to the difficulties mentioned above. The program of developing mathe-
matics in terms of intuitionistic mental constructions is called “intuitionism,”
and is viewed as posing overly severe restrictions on everyday mathemat-
ics. Around the turn of the century, David Hilbert, one of the most influen-
tial mathematicians of all time, was a strong supporter of the new, abstract
methods introduced by Cantor and Dedekind: “no one will drive us from the
paradise that Cantor has created for us.” At the same time, he was sensitive
to foundational criticisms of these new methods (oddly enough, now called
“classical”). He proposed a way of having one’s cake and eating it too:

1. Represent classical methods with formal axioms and rules; represent


mathematical questions as formulas in an axiomatic system.

2. Use safe, “finitary” methods to prove that these formal deductive sys-
tems are consistent.

Hilbert’s work went a long way toward accomplishing the first goal. In
1899, he had done this for geometry in his celebrated book Foundations of geometry. In subsequent years, he and a number of his students and collaborators worked on other areas of mathematics to do what Hilbert had done
for geometry. Hilbert himself gave axiom systems for arithmetic and analy-
sis. Zermelo gave an axiomatization of set theory, which was expanded on by
Fraenkel, Skolem, von Neumann, and others. By the mid-1920s, there were
two approaches that laid claim to the title of an axiomatization of “all” of
mathematics, the Principia mathematica of Russell and Whitehead, and what
came to be known as Zermelo-Fraenkel set theory.
In 1921, Hilbert set out on a research project to establish the goal of proving
these systems to be consistent. He was aided in this project by several of
his students, in particular Bernays, Ackermann, and later Gentzen. The basic
idea for accomplishing this goal was to cast the question of the possibility of
a derivation of an inconsistency in mathematics as a combinatorial problem
about possible sequences of symbols, namely possible sequences of sentences
which meet the criterion of being a correct derivation of, say, ϕ ∧ ¬ ϕ from
the axioms of an axiom system for arithmetic, analysis, or set theory. A proof
of the impossibility of such a sequence of symbols would—since it is itself
a mathematical proof—be formalizable in these axiomatic systems. In other
words, there would be some sentence Con which states that, say, arithmetic
is consistent. Moreover, this sentence should be provable in the systems in
question, especially if its proof requires only very restricted, “finitary” means.
The second aim, that the axiom systems developed would settle every
mathematical question, can be made precise in two ways. In one way, we can
formulate it as follows: For any sentence ϕ in the language of an axiom system
for mathematics, either ϕ or ¬ ϕ is provable from the axioms. If this were true,
then there would be no sentences which can neither be proved nor refuted
on the basis of the axioms, no questions which the axioms do not settle. An
axiom system with this property is called complete. Of course, for any given
sentence it might still be a difficult task to determine which of the two alter-
natives holds. But in principle there should be a method to do so. In fact, for
the axiom and derivation systems considered by Hilbert, completeness would
imply that such a method exists—although Hilbert did not realize this. The
second way to interpret the question would be this stronger requirement: that
there be a mechanical, computational method which would determine, for a
given sentence ϕ, whether it is derivable from the axioms or not.
In 1931, Gödel proved the two “incompleteness theorems,” which showed
that this program could not succeed. There is no axiom system for mathemat-
ics which is complete; specifically, the sentence that expresses the consistency
of the axioms is a sentence which can neither be proved nor refuted.
This struck a lethal blow to Hilbert’s original program. However, as is so
often the case in mathematics, it also opened up exciting new avenues for re-
search. If there is no one, all-encompassing formal system of mathematics, it
makes sense to develop more circumscribed systems and investigate what


can be proved in them. It also makes sense to develop less restricted methods
of proof for establishing the consistency of these systems, and to find ways to
measure how hard it is to prove their consistency. Since Gödel showed that
(almost) every formal system has questions it cannot settle, it makes sense to
look for “interesting” questions a given formal system cannot settle, and to
figure out how strong a formal system has to be to settle them. To the present
day, logicians have been pursuing these questions in a new mathematical dis-
cipline, the theory of proofs.

30.2 Definitions
In order to carry out Hilbert's project of formalizing mathematics and showing that such a formalization is consistent and complete, the first order of business would be that of picking a language, logical framework, and a system of axioms. For our purposes, let us suppose that mathematics can be formalized in a first-order language, i.e., that there is some set of constant symbols, function symbols, and predicate symbols which, together with the connectives and quantifiers of first-order logic, allow us to express the claims of mathematics. Most people agree that such a language exists: the language of set theory, in which ∈ is the only non-logical symbol. That such a simple language is so expressive is of course a very implausible claim at first sight, and it took a lot of work to establish that practically all of mathematics can be expressed in this very austere vocabulary. To keep things simple, for now, let's restrict our discussion to arithmetic, i.e., the part of mathematics that just deals with the natural numbers N. The natural language in which to express facts of arithmetic is L A. L A contains a single two-place predicate symbol <, a single constant symbol 0, one one-place function symbol ′, and two two-place function symbols + and ×.

Definition 30.1. A set of sentences Γ is a theory if it is closed under entailment, i.e., if Γ = {ϕ : Γ ⊨ ϕ}.

There are two easy ways to specify theories. One is as the set of sentences
true in some structure. For instance, consider the structure for L A in which
the domain is N and all non-logical symbols are interpreted as you would
expect.

Definition 30.2. The standard model of arithmetic is the structure N defined as follows:

1. |N| = N

2. 0N = 0

3. ′N(n) = n + 1 for all n ∈ N


4. +N (n, m) = n + m for all n, m ∈ N


5. ×N (n, m) = n · m for all n, m ∈ N
6. <N = {⟨n, m⟩ : n ∈ N, m ∈ N, n < m}
Definition 30.3. The theory of true arithmetic is the set of sentences satisfied in
the standard model of arithmetic, i.e.,
TA = {ϕ : N ⊨ ϕ}.

TA is a theory, for whenever TA ⊨ ϕ, ϕ is satisfied in every structure which satisfies TA. Since N ⊨ TA, N ⊨ ϕ, and so ϕ ∈ TA.
The other way to specify a theory Γ is as the set of sentences entailed by
some set of sentences Γ0 . In that case, Γ is the “closure” of Γ0 under entailment.
Specifying a theory this way is only interesting if Γ0 is explicitly specified, e.g.,
if the elements of Γ0 are listed. At the very least, Γ0 has to be decidable, i.e.,
there has to be a computable test for when a sentence counts as an element
of Γ0 or not. We call the sentences in Γ0 axioms for Γ, and Γ axiomatized by Γ0 .
Definition 30.4. A theory Γ is axiomatized by Γ0 iff
Γ = {ϕ : Γ0 ⊨ ϕ}
Definition 30.5. The theory Q axiomatized by the following sentences is known as "Robinson's Q" and is a very simple theory of arithmetic.

∀x ∀y (x′ = y′ → x = y) (Q1)
∀x 0 ≠ x′ (Q2)
∀x (x ≠ 0 → ∃y x = y′) (Q3)
∀x (x + 0) = x (Q4)
∀x ∀y (x + y′) = (x + y)′ (Q5)
∀x (x × 0) = 0 (Q6)
∀x ∀y (x × y′) = ((x × y) + x) (Q7)
∀x ∀y (x < y ↔ ∃z (x + z′ = y)) (Q8)

The set of sentences {Q1, . . . , Q8} are the axioms of Q, so Q consists of all sentences entailed by them:

Q = {ϕ : {Q1, . . . , Q8} ⊨ ϕ}.
Definition 30.6. Suppose ϕ(x) is a formula in L A with free variables x and y1, . . . , yn. Then any sentence of the form

∀y1 . . . ∀yn ((ϕ(0) ∧ ∀x (ϕ(x) → ϕ(x′))) → ∀x ϕ(x))

is an instance of the induction schema.

Peano arithmetic PA is the theory axiomatized by the axioms of Q together with all instances of the induction schema.


Every instance of the induction schema is true in N. This is easiest to see if the formula ϕ only has one free variable x. Then ϕ(x) defines a subset X_ϕ of N in N. X_ϕ is the set of all n ∈ N such that N, s ⊨ ϕ(x) when s(x) = n. The corresponding instance of the induction schema is

((ϕ(0) ∧ ∀x (ϕ(x) → ϕ(x′))) → ∀x ϕ(x))

If its antecedent is true in N, then 0 ∈ X_ϕ and, whenever n ∈ X_ϕ, so is n + 1. Since 0 ∈ X_ϕ, we get 1 ∈ X_ϕ. With 1 ∈ X_ϕ we get 2 ∈ X_ϕ. And so on. So for every n ∈ N, n ∈ X_ϕ. But this means that ∀x ϕ(x) is satisfied in N.
Both Q and PA are axiomatized theories. The big question is, how strong
are they? For instance, can PA prove all the truths about N that can be ex-
pressed in L A ? Specifically, do the axioms of PA settle all the questions that
can be formulated in L A ?
Another way to put this is to ask: Is PA = TA? For TA obviously does
prove (i.e., it includes) all the truths about N, and it settles all the questions
that can be formulated in L A, since if ϕ is a sentence in L A, then either N ⊨ ϕ or N ⊨ ¬ϕ, and so either TA ⊨ ϕ or TA ⊨ ¬ϕ. Call such a theory complete.

Definition 30.7. A theory Γ is complete iff for every sentence ϕ in its language, either Γ ⊨ ϕ or Γ ⊨ ¬ϕ.

By the Completeness Theorem, Γ ⊨ ϕ iff Γ ⊢ ϕ, so Γ is complete iff for every sentence ϕ in its language, either Γ ⊢ ϕ or Γ ⊢ ¬ϕ.
Another question we are led to ask is this: Is there a computational pro-
cedure we can use to test if a sentence is in TA, in PA, or even just in Q? We
can make this more precise by defining when a set (e.g., a set of sentences) is
decidable.

Definition 30.8. A set X is decidable iff its characteristic function χ_X : N → {0, 1}, with

χ_X(x) = 1 if x ∈ X, and χ_X(x) = 0 if x ∉ X,

is computable.

So our question becomes: Is TA (PA, Q) decidable?


The answer to all these questions will be: no. None of these theories are
decidable. However, this phenomenon is not specific to these particular theo-
ries. In fact, any theory that satisfies certain conditions is subject to the same
results. One of these conditions, which Q and PA satisfy, is that they are ax-
iomatized by a decidable set of axioms.

Definition 30.9. A theory is axiomatizable if it is axiomatized by a decidable set of axioms.


Example 30.10. Any theory axiomatized by a finite set of sentences is axiomatizable, since any finite set is decidable. Thus, Q, for instance, is axiomatizable.

Schematically axiomatized theories like PA are also axiomatizable. For to test if ψ is among the axioms of PA, i.e., to compute the function χ_X where χ_X(ψ) = 1 if ψ is an axiom of PA and = 0 otherwise, we can do the following: First, check if ψ is one of the axioms of Q. If it is, the answer is "yes" and the value of χ_X(ψ) = 1. If not, test if it is an instance of the induction schema. This can be done systematically; in this case, perhaps it's easiest to see that it can be done as follows: Any instance of the induction schema begins with a number of universal quantifiers, and then a sub-formula that is a conditional. The consequent of that conditional is ∀x ϕ(x, y1, . . . , yn) where x and y1, . . . , yn are all the free variables of ϕ and the initial quantifiers of ψ bind the variables y1, . . . , yn. Once we have extracted this ϕ and checked that its free variables match the variables bound by the universal quantifiers at the front and ∀x, we go on to check that the antecedent of the conditional matches

ϕ(0, y1, . . . , yn) ∧ ∀x (ϕ(x, y1, . . . , yn) → ϕ(x′, y1, . . . , yn))

Again, if it does, ψ is an instance of the induction schema, and if it doesn't, ψ isn't.
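The overall shape of this test, with the schema check left abstract, can be sketched in Python (q_axioms and is_induction_instance are assumed parameters, not definitions from the text):

```python
# Sketch of the characteristic function for the axioms of PA: check
# membership in the finite set of Q axioms first, then pattern-match
# the induction schema as described above.

def make_pa_axiom_test(q_axioms, is_induction_instance):
    def chi(psi):
        return 1 if psi in q_axioms or is_induction_instance(psi) else 0
    return chi
```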

In answering this question, and the more general question of which theories are complete or decidable, it will be useful to consider also the following definition. Recall that a set X is enumerable iff it is empty or if there is a surjective function f : N → X. Such a function is called an enumeration of X.

Definition 30.11. A set X is called computably enumerable (c.e. for short) iff it
is empty or it has a computable enumeration.

In addition to axiomatizability, another condition on theories to which the incompleteness theorems apply will be that they are strong enough to prove basic facts about computable functions and decidable relations. By "basic facts," we mean sentences which express what the values of computable functions are for each of their arguments. And by "strong enough" we mean that the theories in question count these sentences among their theorems. For instance, consider a prototypical computable function: addition. The value of + for arguments 2 and 3 is 5, i.e., 2 + 3 = 5. A sentence in the language of arithmetic that expresses that the value of + for arguments 2 and 3 is 5 is: (2 + 3) = 5. And, e.g., Q proves this sentence. More generally, we would like there to be, for each computable function f(x1, x2), a formula ϕ_f(x1, x2, y) in L A such that Q ⊢ ϕ_f(n1, n2, m) whenever f(n1, n2) = m. In this way, Q proves that the value of f for arguments n1, n2 is m. In fact, we require that it proves a bit more, namely that no other number is the value of f for arguments n1, n2. And the same goes for decidable relations. This is made precise in the following two definitions.


Definition 30.12. A formula ϕ(x1, . . . , xk, y) represents the function f : N^k → N in Γ iff whenever f(n1, . . . , nk) = m, then

1. Γ ⊢ ϕ(n1, . . . , nk, m), and

2. Γ ⊢ ∀y (ϕ(n1, . . . , nk, y) → y = m).

Definition 30.13. A formula ϕ(x1, . . . , xk) represents the relation R ⊆ N^k in Γ iff,

1. whenever R(n1, . . . , nk), Γ ⊢ ϕ(n1, . . . , nk), and

2. whenever not R(n1, . . . , nk), Γ ⊢ ¬ϕ(n1, . . . , nk).

A theory is “strong enough” for the incompleteness theorems to apply if


it represents all computable functions and all decidable relations. Q and its
extensions satisfy this condition, but it will take us a while to establish this—
it’s a non-trivial fact about the kinds of things Q can prove, and it’s hard
to show because Q has only a few axioms from which we’ll have to prove
all these facts. However, Q is a very weak theory. So although it’s hard to
prove that Q represents all computable functions, most interesting theories
are stronger than Q, i.e., prove more than Q does. And if Q proves some-
thing, any stronger theory does; since Q represents all computable functions,
every stronger theory does. This means that many interesting theories meet
this condition of the incompleteness theorems. So our hard work will pay
off, since it shows that the incompleteness theorems apply to a wide range of
theories. Certainly, any theory aiming to formalize “all of mathematics” must
prove everything that Q proves, since it should at the very least be able to cap-
ture the results of elementary computations. So any theory that is a candidate
for a theory of “all of mathematics” will be one to which the incompleteness
theorems apply.

30.3 Overview of Incompleteness Results


Hilbert expected that mathematics could be formalized in an axiomatizable
theory which it would be possible to prove complete and decidable. More-
over, he aimed to prove the consistency of this theory with very weak, “fini-
tary,” means, which would defend classical mathematics against the chal-
lenges of intuitionism. Gödel’s incompleteness theorems showed that these
goals cannot be achieved.
Gödel’s first incompleteness theorem showed that a version of Russell and
Whitehead’s Principia Mathematica is not complete. But the proof was actu-
ally very general and applies to a wide variety of theories. This means that it
wasn’t just that Principia Mathematica did not manage to completely capture
mathematics, but that no acceptable theory does. It took a while to isolate
the features of theories that suffice for the incompleteness theorems to apply,


and to generalize Gödel’s proof so that it depends only on these fea-
tures. But we are now in a position to state a very general version of the first
incompleteness theorem for theories in the language L A of arithmetic.

Theorem 30.14. If Γ is a consistent and axiomatizable theory in L A which represents all computable functions and decidable relations, then Γ is not complete.

To say that Γ is not complete is to say that for at least one sentence ϕ, Γ ⊬ ϕ and Γ ⊬ ¬ϕ. Such a sentence is called independent (of Γ). We can in fact relatively quickly prove that there must be independent sentences. But the power of Gödel's proof of the theorem lies in the fact that it exhibits a specific example of such an independent sentence. The intriguing construction produces a sentence G_Γ, called a Gödel sentence for Γ, which is unprovable because in Γ, G_Γ is equivalent to the claim that G_Γ is unprovable in Γ. It does so constructively, i.e., given an axiomatization of Γ and a description of the proof system, the proof gives a method for actually writing down G_Γ.
The construction in Gödel’s proof requires that we find a way to express
in L A the properties of and operations on terms and formulas of L A itself.
These include properties such as “ϕ is a sentence,” “δ is a derivation of ϕ,”
and operations such as ϕ[t/x ]. This way must (a) express these properties
and relations via a “coding” of symbols and sequences thereof (which is what
terms, formulas, derivations, etc. are) as natural numbers (which is what L A
can talk about). It must (b) do this in such a way that Γ will prove the relevant
facts, so we must show that these properties are coded by decidable properties
of natural numbers and the operations correspond to computable functions on
natural numbers. This is called “arithmetization of syntax.”
Before we investigate how syntax can be arithmetized, however, we will
consider the condition that Γ is “strong enough,” i.e., represents all com-
putable functions and decidable relations. This requires that we give a precise
definition of “computable.” This can be done in a number of ways, e.g., via
the model of Turing machines, or as those functions computable by programs
in some general-purpose programming language. Since our aim is to repre-
sent these functions and relations in a theory in the language L A , however, it
is best to pick a simple definition of computability of just numerical functions.
This is the notion of recursive function. So we will first discuss the recursive
functions. We will then show that Q already represents all recursive functions
and relations. This will allow us to apply the incompleteness theorem to spe-
cific theories such as Q and PA, since we will have established that these are
examples of theories that are “strong enough.”
The end result of the arithmetization of syntax is a formula Prov Γ ( x ) which,
via the coding of formulas as numbers, expresses provability from the axioms
of Γ. Specifically, if ϕ is coded by the number n, and Γ ` ϕ, then Γ ` ProvΓ (n).
This “provability predicate” for Γ allows us also to express, in a certain sense,
the consistency of Γ as a sentence of L A : let the “consistency statement” for Γ

426 Release: (None) ((None))


30.4. UNDECIDABILITY AND INCOMPLETENESS

be the sentence ¬ProvΓ (n), where we take n to be the code of a contradiction,


e.g., of ⊥. The second incompleteness theorem states that consistent axioma-
tizable theories also do not prove their own consistency statements. The con-
ditions required for this theorem to apply are a bit more stringent than just
that the theory represents all computable functions and decidable relations,
but we will show that PA satisfies them.

30.4 Undecidability and Incompleteness


Gödel’s proof of the incompleteness theorems requires arithmetization of syntax. But even without that we can obtain some nice results just on the assumption that a theory represents all decidable relations. The proof is a diagonal argument similar to the proof of the undecidability of the halting problem.
Theorem 30.15. If Γ is a consistent theory that represents every decidable relation,
then Γ is not decidable.

Proof. Suppose Γ were decidable. We show that if Γ represents every decidable relation, it must be inconsistent.

Decidable properties (one-place relations) are represented by formulas with one free variable. Let ϕ0(x), ϕ1(x), . . . , be a computable enumeration of all such formulas. Now consider the following set D ⊆ N:

D = {n : Γ ⊢ ¬ϕn(n)}

The set D is decidable, since we can test if n ∈ D by first computing ϕn(x), and from this ¬ϕn(n). Obviously, substituting the term n for every free occurrence of x in ϕn(x) and prefixing the result by ¬ is a mechanical matter. By assumption, Γ is decidable, so we can test if ¬ϕn(n) ∈ Γ. If it is, n ∈ D, and if it isn't, n ∉ D. So D is likewise decidable.

Since Γ represents all decidable properties, it represents D. And the formulas which represent D in Γ are all among ϕ0(x), ϕ1(x), . . . . So let d be a number such that ϕd(x) represents D in Γ. If d ∉ D, then, since ϕd(x) represents D, Γ ⊢ ¬ϕd(d). But that means that d meets the defining condition of D, and so d ∈ D. This contradicts d ∉ D. So by indirect proof, d ∈ D.

Since d ∈ D, by the definition of D, Γ ⊢ ¬ϕd(d). On the other hand, since ϕd(x) represents D in Γ, Γ ⊢ ϕd(d). Hence, Γ is inconsistent.
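Schematically, the diagonal set D of this proof corresponds to the following Python sketch (every name here is a hypothetical stand-in: nth_formula enumerates ϕ0(x), ϕ1(x), . . . , and gamma_decides is the assumed decision procedure for Γ):

```python
# Sketch of the diagonal construction: n is in D iff Gamma proves the
# negation of the n-th formula applied to its own index.

def make_in_D(gamma_decides, nth_formula, subst_numeral, negate):
    def in_D(n):
        phi_n = nth_formula(n)              # the formula phi_n(x)
        diag = subst_numeral(phi_n, n)      # phi_n(n): put the numeral n for x
        return gamma_decides(negate(diag))  # test whether ¬phi_n(n) is in Gamma
    return in_D
```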

The preceding theorem shows that no theory that represents all decidable
relations can be decidable. We will show that Q does represent all decidable
relations; this means that all theories that include Q, such as PA and TA, also
do, and hence also are not decidable.
We can also use this result to obtain a weak version of the first incomplete-
ness theorem. Any theory that is axiomatizable and complete is decidable.
Consistent theories that are axiomatizable and represent all decidable proper-
ties then cannot be complete.


Theorem 30.16. If Γ is axiomatizable and complete, it is decidable.

Proof. Any inconsistent theory is decidable, since inconsistent theories contain all sentences, so the answer to the question "is ϕ ∈ Γ?" is always "yes," i.e., can be decided.

So suppose Γ is consistent, and furthermore is axiomatizable, and complete. Since Γ is axiomatizable, it is computably enumerable. For we can enumerate all the correct derivations from the axioms of Γ by a computable function. From a correct derivation we can compute the sentence it derives, and so together there is a computable function that enumerates all theorems of Γ. A sentence ϕ is a theorem of Γ iff ¬ϕ is not a theorem, since Γ is consistent and complete. We can therefore decide if ϕ ∈ Γ as follows. Enumerate all theorems of Γ. When ϕ appears on this list, we know that Γ ⊢ ϕ. When ¬ϕ appears on this list, we know that Γ ⊬ ϕ. Since Γ is complete, one of these cases eventually obtains, so the procedure eventually produces an answer.
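The decision procedure described in this proof can be sketched in Python as follows (enumerate_theorems and negate are hypothetical stand-ins for the computable enumeration of theorems and the syntactic negation operation):

```python
# Sketch: decide membership in a consistent, complete, axiomatizable
# theory by enumerating its theorems. Completeness and consistency
# guarantee that exactly one of phi, ¬phi eventually appears, so the
# loop always terminates.

def make_decider(enumerate_theorems, negate):
    def decide(phi):
        for theorem in enumerate_theorems():
            if theorem == phi:
                return True
            if theorem == negate(phi):
                return False
    return decide
```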

Corollary 30.17. If Γ is consistent, axiomatizable, and represents every decidable property, it is not complete.

Proof. If Γ were complete, it would be decidable by the previous theorem (since it is axiomatizable and consistent). But since Γ represents every decidable property, it is not decidable, by the first theorem.

Once we have established that, e.g., Q represents all decidable properties, the corollary tells us that Q must be incomplete. However, its proof does not provide an example of an independent sentence; it merely shows that such a sentence must exist. For this, we have to arithmetize syntax and follow Gödel's original proof idea. And of course, we still have to show the first claim, namely that Q does, in fact, represent all decidable properties.

Problems
Problem 30.1. Show that TA = {ϕ : N ⊨ ϕ} is not axiomatizable. You may assume that TA represents all decidable properties.



Chapter 31

Arithmetization of Syntax

Note that arithmetization for signed tableaux is not yet available.

31.1 Introduction
In order to connect computability and logic, we need a way to talk about the
objects of logic (symbols, terms, formulas, derivations), operations on them,
and their properties and relations, in a way amenable to computational treat-
ment. We can do this directly, by considering computable functions and re-
lations on symbols, sequences of symbols, and other objects built from them.
Since the objects of logical syntax are all finite and built from an enumerable set of symbols, this is possible for some models of computation. But other models of computation, such as the recursive functions, are restricted to numbers, their relations and functions.
to be able to deal with syntax within certain theories, specifically, in theo-
ries formulated in the language of arithmetic. In these cases it is necessary to
arithmetize syntax, i.e., to represent syntactic objects, operations on them, and
their relations, as numbers, arithmetical functions, and arithmetical relations,
respectively. The idea, which goes back to Leibniz, is to assign numbers to
syntactic objects.
It is relatively straightforward to assign numbers to symbols as their “codes.”
Some symbols pose a bit of a challenge, since, e.g., there are infinitely many
variables, and even infinitely many function symbols of each arity n. But of
course it’s possible to assign numbers to symbols systematically in such a way
that, say, v2 and v3 are assigned different codes. Sequences of symbols (such
as terms and formulas) are a bigger challenge. But if we can deal with sequences
of numbers purely arithmetically (e.g., by the powers-of-primes coding of se-
quences), we can extend the coding of individual symbols to coding of se-
quences of symbols, and then further to sequences or other arrangements of


formulas, such as derivations. This extended coding is called “Gödel numbering.” Every term, formula, and derivation is assigned a Gödel number.
By coding sequences of symbols as sequences of their codes, and by choosing a system of coding sequences that can be dealt with using computable functions, we can then also deal with Gödel numbers using computable functions. In practice, all the relevant functions will be primitive recursive. For instance, computing the length of a sequence and computing the i-th element of a sequence from the code of the sequence are both primitive recursive. If the number coding the sequence is, e.g., the Gödel number of a formula ϕ, we immediately see that the length of a formula and the (code of the) i-th symbol in a formula can also be computed from the Gödel number of ϕ. It is a bit harder to prove that, e.g., the property of being the Gödel number of a correctly formed term, or of being the Gödel number of a correct derivation, is primitive recursive. It is nevertheless possible, because the sequences of interest (terms, formulas, derivations) are inductively defined.

As an example, consider the operation of substitution. If ϕ is a formula, x a variable, and t a term, then ϕ[t/x] is the result of replacing every free occurrence of x in ϕ by t. Now suppose we have assigned Gödel numbers to ϕ, x, t, say, k, l, and m, respectively. The same scheme assigns a Gödel number to ϕ[t/x], say, n. This mapping, of k, l, m to n, is the arithmetical analog of the substitution operation. When the substitution operation maps ϕ, x, t to ϕ[t/x], the arithmetized substitution function maps the Gödel numbers k, l, m to the Gödel number n. We will see that this function is primitive recursive.
Arithmetization of syntax is not just of abstract interest, although it was
originally a non-trivial insight that languages like the language of arithmetic,
which do not come with mechanisms for “talking about” languages can, after
all, formalize complex properties of expressions. It is then just a small step to
ask what a theory in this language, such as Peano arithmetic, can prove about
its own language (including, e.g., whether sentences are provable or true).
This leads us to the famous limitative theorems of Gödel (about unprovabil-
ity) and Tarski (the undefinability of truth). But the trick of arithmetizing syn-
tax is also important in order to prove some important results in computability
theory, e.g., about the computational power of theories or the relationship be-
tween different models of computability. The arithmetization of syntax serves
as a model for arithmetizing other objects and properties. For instance, it is
similarly possible to arithmetize configurations and computations (say, of Tur-
ing machines). This makes it possible to simulate computations in one model
(e.g., Turing machines) in another (e.g., recursive functions).

31.2 Coding Symbols


The basic language L of first order logic makes use of the symbols

⊥ ¬ ∨ ∧ → ∀ ∃ = ( ) ,


together with enumerable sets of variables and constant symbols, and enu-
merable sets of function symbols and predicate symbols of arbitrary arity. We
can assign codes to each of these symbols in such a way that every symbol is
assigned a unique number as its code, and no two different symbols are as-
signed the same number. We know that this is possible since the set of all
symbols is enumerable and so there is a bijection between it and the set of nat-
ural numbers. But we want to make sure that we can recover the symbol (as
well as some information about it, e.g., the arity of a function symbol) from
its code in a computable way. There are many possible ways of doing this,
of course. Here is one such way, which uses primitive recursive functions.
(Recall that ⟨n0, . . . , nk⟩ is the number coding the sequence of numbers n0, . . . , nk.)

Definition 31.1. If s is a symbol of L, let the symbol code cs be defined as follows:

1. If s is among the logical symbols, cs is given by the following table:

⊥ ¬ ∨ ∧ → ∀
⟨0, 0⟩ ⟨0, 1⟩ ⟨0, 2⟩ ⟨0, 3⟩ ⟨0, 4⟩ ⟨0, 5⟩
∃ = ( ) ,
⟨0, 6⟩ ⟨0, 7⟩ ⟨0, 8⟩ ⟨0, 9⟩ ⟨0, 10⟩

2. If s is the i-th variable vi, then cs = ⟨1, i⟩.

3. If s is the i-th constant symbol ci, then cs = ⟨2, i⟩.

4. If s is the i-th n-ary function symbol f_i^n, then cs = ⟨3, n, i⟩.

5. If s is the i-th n-ary predicate symbol P_i^n, then cs = ⟨4, n, i⟩.

Proposition 31.2. The following relations are primitive recursive:

1. Fn(x, n) iff x is the code of f_i^n for some i, i.e., x is the code of an n-ary function symbol.

2. Pred(x, n) iff x is the code of P_i^n for some i, or x is the code of = and n = 2, i.e., x is the code of an n-ary predicate symbol.
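As an illustration (not part of the text), such codes can be decoded computationally; here is a Python sketch of a decoder for pair and triple codes and a test for Fn:

```python
# Sketch: invert the powers-of-primes coding for pair and triple codes,
# then test whether a code is that of an n-ary function symbol <3, n, i>.

def decode(c):
    ns = []
    for p in (2, 3, 5):
        e = 0
        while c % p == 0:
            c //= p
            e += 1
        if e == 0:
            break
        ns.append(e - 1)  # each component was stored as exponent + 1
    return ns

def Fn(x, n):
    d = decode(x)
    return len(d) == 3 and d[0] == 3 and d[1] == n

print(Fn(2**4 * 3**3 * 5**1, 2))  # code of f_0^2, i.e., <3, 2, 0> -> True
```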

Definition 31.3. If s0, . . . , s_{n−1} is a sequence of symbols, its Gödel number is ⟨c_{s_0}, . . . , c_{s_{n−1}}⟩.

Note that codes and Gödel numbers are different things. For instance, the variable v5 has a code c_{v5} = ⟨1, 5⟩ = 2^2 · 3^6. But the variable v5 considered as a term is also a sequence of symbols (of length 1). The Gödel number #v5# of the term v5 is ⟨c_{v5}⟩ = 2^{c_{v5}+1} = 2^{2^2·3^6+1}.


Example 31.4. Recall that if k0, . . . , k_{n−1} is a sequence of numbers, then the code of the sequence ⟨k0, . . . , k_{n−1}⟩ in the power-of-primes coding is

2^{k0+1} · 3^{k1+1} · · · · · p_{n−1}^{k_{n−1}+1},

where p_i is the i-th prime (starting with p0 = 2). So for instance, the formula v0 = 0, or, more explicitly, =(v0, c0), has the Gödel number

⟨c_=, c_(, c_{v0}, c_,, c_{c0}, c_)⟩.

Here, c_= is ⟨0, 7⟩ = 2^{0+1} · 3^{7+1}, c_{v0} is ⟨1, 0⟩ = 2^{1+1} · 3^{0+1}, etc. So #=(v0, c0)# is

2^{c_=+1} · 3^{c_(+1} · 5^{c_{v0}+1} · 7^{c_,+1} · 11^{c_{c0}+1} · 13^{c_)+1} =
2^{2^1·3^8+1} · 3^{2^1·3^9+1} · 5^{2^2·3^1+1} · 7^{2^1·3^{11}+1} · 11^{2^3·3^1+1} · 13^{2^1·3^{10}+1} =
2^{13123} · 3^{39367} · 5^{13} · 7^{354295} · 11^{25} · 13^{118099}.
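This computation is easy to check mechanically. The following Python sketch (seq_code is an assumed helper for illustration; it handles sequences of length at most six) reproduces the numbers above:

```python
# Sketch: the powers-of-primes coding <n0, ..., n_{k-1}> for short
# sequences, used to verify the Gödel number of =(v0, c0).

PRIMES = [2, 3, 5, 7, 11, 13]

def seq_code(ns):
    result = 1
    for p, n in zip(PRIMES, ns):  # multiply p_i ** (n_i + 1)
        result *= p ** (n + 1)
    return result

codes = [seq_code(pair) for pair in
         [(0, 7), (0, 8), (1, 0), (0, 10), (2, 0), (0, 9)]]  # =, (, v0, comma, c0, )
g = seq_code(codes)
assert g == 2**13123 * 3**39367 * 5**13 * 7**354295 * 11**25 * 13**118099
```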

31.3 Coding Terms


A term is simply a certain kind of sequence of symbols: it is built up induc-
tively from constants and variables according to the formation rules for terms.
Since sequences of symbols can be coded as numbers—using a coding scheme
for the symbols plus a way to code sequences of numbers—assigning Gödel
numbers to terms is not difficult. The challenge is rather to show that the
property a number has if it is the Gödel number of a correctly formed term is
computable, or in fact primitive recursive.

Proposition 31.5. The relations Term( x ) and ClTerm( x ) which hold iff x is the
Gödel number of a term or a closed term, respectively, are primitive recursive.

Proof. A sequence of symbols s is a term iff there is a sequence s0, . . . , s_{k−1} = s of terms which records how the term s was formed from constant symbols and variables according to the formation rules for terms. To express that such a putative formation sequence follows the formation rules it has to be the case that, for each i < k, either

1. si is a variable vj, or

2. si is a constant symbol cj, or

3. si is built from n terms t1, . . . , tn occurring prior to place i using an n-place function symbol f_j^n.

To show that the corresponding relation on Gödel numbers is primitive recursive, we have to express this condition primitive recursively, i.e., using primitive recursive functions, relations, and bounded quantification.


Suppose y is the number that codes the sequence s0, . . . , s_{k−1}, i.e., y = ⟨#s0#, . . . , #s_{k−1}#⟩. It codes a formation sequence for the term with Gödel number x iff for all i < k:

1. there is a j such that (y)i = #vj#, or

2. there is a j such that (y)i = #cj#, or

3. there is an n and a number z = ⟨z1, . . . , zn⟩ such that each zl is equal to some (y)_{i′} for i′ < i and

(y)i = #f_j^n(# ⌢ flatten(z) ⌢ #)#,

and moreover (y)_{k−1} = x. The function flatten(z) turns the sequence ⟨#t1#, . . . , #tn#⟩ into #t1, . . . , tn# and is primitive recursive.

The indices j, n, the Gödel numbers zl of the terms tl, and the code z of the sequence ⟨z1, . . . , zn⟩, in (3) are all less than y. We can replace k above with len(y). Hence we can express "y is the code of a formation sequence of the term with Gödel number x" in a way that shows that this relation is primitive recursive.

We now just have to convince ourselves that there is a primitive recursive bound on y. But if x is the Gödel number of a term, it must have a formation sequence with at most len(x) terms (since every term in the formation sequence of s must start at some place in s, and no two subterms can start at the same place). The Gödel number of each subterm of s is of course ≤ x. Hence, there always is a formation sequence with code ≤ x^{len(x)}.

For ClTerm, simply leave out the clause for variables.

Alternative proof of Proposition 31.5. The inductive definition says that constant symbols and variables are terms, and if t1, . . . , tn are terms, then so is f_j^n(t1, . . . , tn), for any n and j. So terms are formed in stages: constant symbols and variables at stage 0, terms involving one function symbol at stage 1, those involving at least two nested function symbols at stage 2, etc. Let's say that a sequence of symbols s is a term of level l iff s can be formed by applying the inductive definition of terms l (or fewer) times, i.e., it "becomes" a term by stage l or before. So s is a term of level l + 1 iff

1. s is a variable vj, or

2. s is a constant symbol cj, or

3. s is built from n terms t1, . . . , tn of level l and an n-place function symbol f_j^n.

To show that the corresponding relation on Gödel numbers is primitive recursive, we have to express this condition primitive recursively, i.e., using primitive recursive functions, relations, and bounded quantification.
The number x is the Gödel number of a term s of level l + 1 iff


1. there is a j such that x = #vj#, or

2. there is a j such that x = #cj#, or

3. there is an n, a j, and a number z = ⟨z1, . . . , zn⟩ such that each zi is the Gödel number of a term of level l and

x = #f_j^n(# ⌢ flatten(z) ⌢ #)#.

The indices j, n, the Gödel numbers zi of the terms ti, and the code z of the sequence ⟨z1, . . . , zn⟩, in (3) are all less than x. So we get a primitive recursive definition by:

lTerm(x, 0) = Var(x) ∨ Const(x)
lTerm(x, l + 1) = Var(x) ∨ Const(x) ∨
    (∃z < x) ((∀i < len(z)) lTerm((z)i, l) ∧
    (∃j < x) x = (#f_j^{len(z)}(# ⌢ flatten(z) ⌢ #)#))

We can now define Term(x) by lTerm(x, x), since the level of a term is always less than the Gödel number of the term.
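The level-bounded recursion can be mirrored in Python on a simple parsed representation of terms (the representation and names below are illustrative assumptions, not the arithmetical definition itself):

```python
# Sketch: terms as strings ("v0", "c1") or pairs (function symbol, argument
# list). term_of_level mirrors lTerm; is_term bounds the level by the size
# of the term, just as Term(x) is defined as lTerm(x, x).

def size(t):
    return 1 if isinstance(t, str) else 1 + sum(size(s) for s in t[1])

def term_of_level(t, l):
    if isinstance(t, str):
        return t.startswith(("v", "c"))  # variable or constant
    if l == 0:
        return False
    _f, args = t
    return all(term_of_level(s, l - 1) for s in args)

def is_term(t):
    return term_of_level(t, size(t))

print(is_term(("f", ["v0", ("g", ["c1"])])))  # f(v0, g(c1)) -> True
```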

Proposition 31.6. The function num(n) = #n# is primitive recursive.

Proof. We define num(n) by primitive recursion:

num(0) = #0#
num(n + 1) = #′(# ⌢ num(n) ⌢ #)#.
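At the level of symbol strings, this recursion looks as follows in Python (a sketch for illustration; the constant symbol is written 0 and the successor symbol ′ here):

```python
# Sketch: build the numeral for n as a string; num(n) is the Gödel
# number of this symbol sequence.

def numeral(n):
    if n == 0:
        return "0"                      # the constant symbol
    return "'(" + numeral(n - 1) + ")"  # successor applied to the previous numeral

print(numeral(3))  # '('('('(0)))
```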

31.4 Coding Formulas


Proposition 31.7. The relation Atom(x), which holds iff x is the Gödel number of an atomic formula, is primitive recursive.

Proof. The number x is the Gödel number of an atomic formula iff one of the following holds:

1. There are n, j < x, and z < x such that for each i < n, Term((z)i), and

x = #P_j^n(# ⌢ flatten(z) ⌢ #)#.

2. There are z1, z2 < x such that Term(z1), Term(z2), and

x = #=(# ⌢ z1 ⌢ #,# ⌢ z2 ⌢ #)#.

3. x = #⊥#.

Proposition 31.8. The relation Frm( x ) which holds iff x is the Gödel number of
a formula is primitive recursive.

Proof. A sequence of symbols s is a formula iff there is a formation sequence s0, . . . , s_{k−1} = s of formulas which records how s was formed from atomic formulas according to the formation rules. The code for each si (and indeed the code of the sequence ⟨s0, . . . , s_{k−1}⟩) is less than the code x of s.

Proposition 31.9. The relation FreeOcc( x, z, i ), which holds iff the i-th symbol of the
formula with Gödel number x is a free occurrence of the variable with Gödel number z,
is primitive recursive.

Proof. Exercise.

Proposition 31.10. The property Sent( x ) which holds iff x is the Gödel number of a
sentence is primitive recursive.

Proof. A sentence is a formula without free occurrences of variables. So Sent(x) holds iff

(∀i < len(x)) (∀z < x) ((∃j < z) z = #vj# → ¬FreeOcc(x, z, i)).

31.5 Substitution
Proposition 31.11. There is a primitive recursive function Subst(x, y, z) with the property that

Subst(#ϕ#, #t#, #u#) = #ϕ[t/u]#.

Proof. We can define a function hSubst by primitive recursion as follows:

hSubst(x, y, z, 0) = Λ
hSubst(x, y, z, i + 1) =
    hSubst(x, y, z, i) ⌢ y                  if FreeOcc(x, z, i + 1)
    append(hSubst(x, y, z, i), (x)_{i+1})   otherwise.

Subst(x, y, z) can now be defined as hSubst(x, y, z, len(x)).
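At the level of symbol sequences, the recursion behaves like the following Python sketch (a simplification for illustration: it splices t in at every occurrence of u, whereas hSubst consults FreeOcc so as to replace only free occurrences):

```python
# Sketch: phi and t as lists of symbols, u a variable symbol. Walk phi
# symbol by symbol, splicing in all of t at each occurrence of u and
# copying every other symbol, mirroring the two cases of hSubst.

def subst_symbols(phi, t, u):
    out = []
    for s in phi:
        if s == u:
            out.extend(t)   # the "hSubst(...) ⌢ y" case
        else:
            out.append(s)   # the "append(hSubst(...), (x)_{i+1})" case
    return out

print(subst_symbols(["=", "(", "v0", ",", "c0", ")"], ["f", "(", "c1", ")"], "v0"))
# ['=', '(', 'f', '(', 'c1', ')', ',', 'c0', ')']
```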

Proposition 31.12. The relation FreeFor( x, y, z), which holds iff the term with
Gödel number y is free for the variable with Gödel number z in the formula with
Gödel number x, is primitive recursive.

Proof. Exercise.


31.6 Derivations in LK
In order to arithmetize derivations, we must represent derivations as num-
bers. Since derivations are trees of sequents where each inference carries also
a label, a recursive representation is the most obvious approach: we represent
a derivation as a tuple, the components of which are the end-sequent, the la-
bel, and the representations of the sub-derivations leading to the premises of
the last inference.

Definition 31.13. If Γ is a finite sequence of sentences, Γ = ⟨ϕ1, . . . , ϕn⟩, then

   #Γ# = ⟨#ϕ1#, . . . , #ϕn#⟩.

If Γ ⇒ ∆ is a sequent, then a Gödel number of Γ ⇒ ∆ is

   #Γ ⇒ ∆# = ⟨#Γ#, #∆#⟩

If π is a derivation in LK, then #π# is

1. ⟨0, #Γ ⇒ ∆#⟩ if π consists only of the initial sequent Γ ⇒ ∆.

2. ⟨1, #Γ ⇒ ∆#, k, #π′#⟩ if π ends in an inference with one premise, k is given by the following table according to which rule was used in the last inference, and π′ is the immediate subproof ending in the premise of the last inference.
Rule: WL WR CL CR XL XR
k: 1 2 3 4 5 6

Rule: ¬L ¬R ∧L ∨R →R
k: 7 8 9 10 11

Rule: ∀L ∀R ∃L ∃R =
k: 12 13 14 15 16

3. ⟨2, #Γ ⇒ ∆#, k, #π′#, #π′′#⟩ if π ends in an inference with two premises, k is given by the following table according to which rule was used in the last inference, and π′, π′′ are the immediate subproofs ending in the left and right premise of the last inference, respectively.
Rule: Cut ∧R ∨L →L
k: 1 2 3 4

Having settled on a representation of derivations, we must also show that we can manipulate such derivations primitive recursively, and express their essential properties and relations primitive recursively. Some operations are simple: e.g., given a Gödel number d of a derivation, (d)1 gives us the Gödel number of its end-sequent. Some are much harder. We’ll at least sketch how to do this. The goal is to show that the relation “π is a derivation of ϕ from Γ” is a primitive recursive relation of the Gödel numbers of π and ϕ.


Proposition 31.14. The following relations are primitive recursive:

1. Γ ⇒ ∆ is an initial sequent.

2. Γ ⇒ ∆ follows from Γ 0 ⇒ ∆0 (and Γ 00 ⇒ ∆00 ) by a rule of LK.

3. π is a correct LK-derivation.

Proof. We have to show that the corresponding relations between Gödel num-
bers of formulas, sequences of Gödel numbers of formulas (which code se-
quences of formulas), and Gödel numbers of sequents, are primitive recur-
sive.

1. Γ ⇒ ∆ is an initial sequent if either there is a sentence ϕ such that Γ ⇒ ∆ is ϕ ⇒ ϕ, or there is a term t such that Γ ⇒ ∆ is ∅ ⇒ t = t. In terms of Gödel numbers, InitSeq(s) holds iff

   (∃x < s) (Sent(x) ∧ s = ⟨⟨x⟩, ⟨x⟩⟩) ∨
   (∃t < s) (Term(t) ∧ s = ⟨0, ⟨#=(# ⌢ t ⌢ #,# ⌢ t ⌢ #)#⟩⟩).

2. Here we have to show that for each rule of inference R the relation
FollowsByR (s, s0 ) which holds if s and s0 are the Gödel numbers of con-
clusion and premise of a correct application of R is primitive recursive.
If R has two premises, FollowsByR of course has three arguments.
For instance, Γ ⇒ ∆ follows correctly from Γ′ ⇒ ∆′ by ∃R iff Γ = Γ′ and there is a sequence of formulas ∆′′, a formula ϕ, a variable x and a closed term t such that ∆′ = ∆′′, ϕ[t/x] and ∆ = ∆′′, ∃x ϕ. We just have to translate this into Gödel numbers. If s = #Γ ⇒ ∆# then (s)0 = #Γ# and (s)1 = #∆#. So, FollowsBy∃R(s, s′) holds iff

   (s)0 = (s′)0 ∧
   (∃d < s) (∃f < s) (∃x < s) (∃t < s′) (Frm(f) ∧ Var(x) ∧ Term(t) ∧
      (s′)1 = d ⌢ ⟨Subst(f, t, x)⟩ ∧
      (s)1 = d ⌢ ⟨#∃# ⌢ x ⌢ f⟩)

The individual lines express, respectively, “Γ = Γ′,” “there is a sequence (∆′′) with Gödel number d, a formula (ϕ) with Gödel number f, a variable with Gödel number x, and a term with Gödel number t,” “∆′ = ∆′′, ϕ[t/x],” and “∆ = ∆′′, ∃x ϕ”. (Remember that #∆# is the number of a sequence of Gödel numbers of formulas in ∆.)

3. We first define a helper relation hDeriv(s, n) which holds if s codes a cor-


rect derivation to at least n inferences up from the end sequent. If n = 0
we let the relation be satisfied by default. Otherwise, hDeriv(s, n + 1) iff


either s consists just of an initial sequent, or it ends in a correct inference and the codes of the immediate subderivations satisfy hDeriv(s′, n).

hDeriv(s, 0) ⇔ true
hDeriv(s, n + 1) ⇔
   ((s)0 = 0 ∧ InitSeq((s)1)) ∨
   ((s)0 = 1 ∧
      (((s)2 = 1 ∧ FollowsByWL((s)1, ((s)3)1)) ∨
      ...
      ((s)2 = 16 ∧ FollowsBy=((s)1, ((s)3)1))) ∧
      hDeriv((s)3, n)) ∨
   ((s)0 = 2 ∧
      (((s)2 = 1 ∧ FollowsByCut((s)1, ((s)3)1, ((s)4)1)) ∨
      ...
      ((s)2 = 4 ∧ FollowsBy→L((s)1, ((s)3)1, ((s)4)1))) ∧
      hDeriv((s)3, n) ∧ hDeriv((s)4, n))

This is a primitive recursive definition. If the number n is large enough,


e.g., larger than the maximum number of inferences between an initial
sequent and the end sequent in s, it holds of s iff s is the Gödel number
of a correct derivation. The number s itself is larger than that maximum
number of inferences. So we can now define Deriv(s) by hDeriv(s, s).

Proposition 31.15. Suppose Γ is a primitive recursive set of sentences. Then the relation PrfΓ(x, y) expressing “x is the code of a derivation π of Γ0 ⇒ ϕ for some finite Γ0 ⊆ Γ and y is the Gödel number of ϕ” is primitive recursive.

Proof. Suppose “y ∈ Γ” is given by the primitive recursive predicate R Γ (y).


We have to show that PrfΓ ( x, y) which holds iff y is the Gödel number of a
sentence ϕ and x is the code of an LK-derivation with end sequent Γ0 ⇒ ϕ is
primitive recursive.
By the previous proposition, the property Deriv(x) which holds iff x is the code of a correct derivation π in LK is primitive recursive. If x is such a code, then (x)1 is the code of the end sequent of π, and so ((x)1)0 is the code of the left side of the end sequent and ((x)1)1 the right side. So we can express “the right side of the end sequent of π is ϕ” as len(((x)1)1) = 1 ∧ (((x)1)1)0 = y. The left side of the end sequent of π is of course automatically finite; we just have to express that every sentence in it is in Γ. Thus we can define PrfΓ(x, y)


by

   PrfΓ(x, y) ⇔ Sent(y) ∧ Deriv(x) ∧
      (∀i < len(((x)1)0)) RΓ((((x)1)0)i) ∧
      len(((x)1)1) = 1 ∧ (((x)1)1)0 = y

31.7 Derivations in Natural Deduction


In order to arithmetize derivations, we must represent derivations as num-
bers. Since derivations are trees of formulas where each inference carries
one or two labels, a recursive representation is the most obvious approach:
we represent a derivation as a tuple, the components of which are the end-
formula, the labels, and the representations of the sub-derivations leading to
the premises of the last inference.
Definition 31.16. If δ is a derivation in natural deduction, then #δ# is

1. ⟨0, #ϕ#, n⟩ if δ consists only of the assumption ϕ. The number n is 0 if it is an undischarged assumption, and the numerical label otherwise.

2. ⟨1, #ϕ#, n, k, #δ1#⟩ if δ ends in an inference with one premise, k is given by


the following table according to which rule was used in the last infer-
ence, and δ1 is the immediate subproof ending in the premise of the last
inference. n is the label of the inference, or 0 if the inference does not
discharge any assumptions.
Rule: ∧Elim ∨Intro →Intro ¬Intro ⊥I
k: 1 2 3 4 5

Rule: ⊥C ∀Intro ∀Elim ∃Intro =Intro


k: 6 7 8 9 10

3. ⟨2, #ϕ#, n, k, #δ1#, #δ2#⟩ if δ ends in an inference with two premises, k is


given by the following table according to which rule was used in the
last inference, and δ1 , δ2 are the immediate subderivations ending in the
left and right premise of the last inference, respectively. n is the label of
the inference, or 0 if the inference does not discharge any assumptions.
Rule: ∧Intro →Elim ¬Elim
k: 1 2 3

4. ⟨3, #ϕ#, n, #δ1#, #δ2#, #δ3#⟩ if δ ends in an ∨Elim inference. δ1, δ2, δ3 are the


immediate subderivations ending in the left, middle, and right premise
of the last inference, respectively, and n is the label of the inference.
Example 31.17. Consider the very simple derivation


      [(ϕ ∧ ψ)]¹
      ————————————— ∧Elim
           ϕ
      ————————————— →Intro, discharging 1
      ((ϕ ∧ ψ) → ϕ)

The Gödel number of the assumption would be d0 = ⟨0, #(ϕ ∧ ψ)#, 1⟩. The Gödel number of the derivation ending in the conclusion of ∧Elim would be d1 = ⟨1, #ϕ#, 0, 1, d0⟩ (1 since ∧Elim has one premise, Gödel number of conclusion ϕ, 0 because no assumption is discharged, 1 is the number coding ∧Elim). The Gödel number of the entire derivation then is ⟨1, #((ϕ ∧ ψ) → ϕ)#, 1, 3, d1⟩, i.e.,

   2^2 · 3^{#((ϕ∧ψ)→ϕ)#+1} · 5^2 · 7^4 · 11^{d1+1},

where d1 = 2^2 · 3^{#ϕ#+1} · 5^1 · 7^2 · 11^{d0+1} and d0 = 2^1 · 3^{#(ϕ∧ψ)#+1} · 5^2.
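We can check the inner codes of this example numerically. In the Python sketch below, GN_PHI, GN_CONJ and GN_CONCL are small made-up stand-ins for the formula codes #ϕ#, #(ϕ ∧ ψ)# and #((ϕ ∧ ψ) → ϕ)#—the real codes are themselves enormous—and the outermost Gödel number is left unevaluated, since 11^{d1+1} already has far more digits than could ever be written down.

```python
# Verifying the tuple coding of Example 31.17 with fake, small formula codes.

def nth_prime(i):
    count, n = 0, 1
    while count <= i:
        n += 1
        if all(n % d != 0 for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return n

def encode(seq):  # <k0,...,kn> = 2^(k0+1) * 3^(k1+1) * ...
    code = 1
    for i, k in enumerate(seq):
        code *= nth_prime(i) ** (k + 1)
    return code

GN_PHI, GN_CONJ, GN_CONCL = 1, 2, 3   # stand-ins, NOT the real codes

d0 = encode([0, GN_CONJ, 1])          # assumption (phi & psi), label 1
d1 = encode([1, GN_PHI, 0, 1, d0])    # one-premise inference, rule 1 = ConjElim

assert d0 == 2**1 * 3**(GN_CONJ + 1) * 5**2
assert d1 == 2**2 * 3**(GN_PHI + 1) * 5**1 * 7**2 * 11**(d0 + 1)
print(d0)                             # 1350; d1 already has over 1400 digits

# The full code <1, GN_CONCL, 1, 3, d1> would be
#   2**2 * 3**(GN_CONCL + 1) * 5**2 * 7**4 * 11**(d1 + 1),
# whose number of digits is on the order of d1 itself, so we don't evaluate it.
```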

Having settled on a representation of derivations, we must also show that


we can manipulate such derivations primitive recursively, and express their essential properties and relations primitive recursively. Some operations are simple: e.g., given
a Gödel number d of a derivation, (d)1 gives us the Gödel number of its end-
formula. Some are much harder. We’ll at least sketch how to do this. The
goal is to show that the relation “δ is a derivation of ϕ from Γ” is primitive
recursive on the Gödel numbers of δ and ϕ.

Proposition 31.18. The following relations are primitive recursive:

1. ϕ occurs as an assumption in δ with label n.

2. All assumptions in δ with label n are of the form ϕ (i.e., we can discharge the assumption ϕ using label n in δ).

3. ϕ is an undischarged assumption of δ.

4. An inference with conclusion ϕ, upper derivations δ1 (and δ2 , δ3 ), and dis-


charge label n is correct.

5. δ is a correct natural deduction derivation.

Proof. We have to show that the corresponding relations between Gödel num-
bers of formulas, sequences of Gödel numbers of formulas (which code sets
of formulas), and Gödel numbers of derivations are primitive recursive.

1. We want to show that Assum( x, d, n), which holds if x is the Gödel num-
ber of an assumption of the derivation with Gödel number d labelled n,
is primitive recursive. For this we need a helper relation hAssum( x, d, n, i )
which holds if the formula ϕ with Gödel number x occurs as an initial
formula with label n in the derivation with Gödel number d within i


inferences up from the end-formula.

hAssum(x, d, n, 0) ⇔ true
hAssum(x, d, n, i + 1) ⇔
   Sent(x) ∧ (d = ⟨0, x, n⟩ ∨
   ((d)0 = 1 ∧ hAssum(x, (d)4, n, i)) ∨
   ((d)0 = 2 ∧ (hAssum(x, (d)4, n, i) ∨ hAssum(x, (d)5, n, i))) ∨
   ((d)0 = 3 ∧ (hAssum(x, (d)3, n, i) ∨ hAssum(x, (d)4, n, i) ∨
      hAssum(x, (d)5, n, i))))

If the number i is large enough, e.g., larger than the maximum num-
ber of inferences between an initial formula and the end-formula of δ,
it holds of x, d, n, and i iff ϕ is an initial formula in δ labelled n. The
number d itself is larger than that maximum number of inferences. So
we can define

Assum(x, d, n) ⇔ hAssum(x, d, n, d).

2. We want to show that Discharge(x, d, n) is primitive recursive, where the relation holds if all assumptions with label n in the derivation with Gödel number d are the formula with Gödel number x. But this relation holds iff (∀y < d) (Assum(y, d, n) → y = x).
3. An occurrence of an assumption is not open if it occurs with label n in
a subderivation that ends in a rule with discharge label n. Define the
helper relation hNotOpen(x, d, n, i) as

hNotOpen(x, d, n, 0) ⇔ true
hNotOpen(x, d, n, i + 1) ⇔
   (d)2 = n ∨
   ((d)0 = 1 ∧ hNotOpen(x, (d)4, n, i)) ∨
   ((d)0 = 2 ∧ hNotOpen(x, (d)4, n, i) ∧ hNotOpen(x, (d)5, n, i)) ∨
   ((d)0 = 3 ∧ hNotOpen(x, (d)3, n, i) ∧ hNotOpen(x, (d)4, n, i) ∧
      hNotOpen(x, (d)5, n, i))

Note that all assumptions of the form ϕ labelled n are discharged in δ iff either the last inference of δ discharges them (i.e., the last inference has label n), or they are discharged in all of the immediate subderivations.
A formula ϕ is an open assumption of δ iff it is an initial formula of δ (with label n) and is not discharged in δ (by a rule with label n). We can


then define OpenAssum(x, d) as

(∃n < d) (Assum(x, d, n) ∧ ¬hNotOpen(x, d, n, d)).

4. Here we have to show that for each rule of inference R the relation
FollowsByR ( x, d1 , n) which holds if x is the Gödel number of the conclu-
sion and d1 is the Gödel number of a derivation ending in the premise
of a correct application of R with label n is primitive recursive, and sim-
ilarly for rules with two or three premises.

The simplest case is that of the =Intro rule. Here there is no premise, i.e., d1 = 0. However, ϕ must be of the form t = t, for a closed term t. Here, a primitive recursive definition is

(∃t < x) (ClTerm(t) ∧ x = #=(# ⌢ t ⌢ #,# ⌢ t ⌢ #)#) ∧ d1 = 0.

For a more complicated example, FollowsBy→Intro(x, d1, n) holds iff ϕ is of the form (ψ → χ), the end-formula of δ is χ, and any initial formula in δ labelled n is of the form ψ. We can express this primitive recursively by

(∃y < x) (∃z < x) (Sent(y) ∧ Sent(z) ∧ Discharge(y, d1, n) ∧
   (d1)1 = z ∧ x = #(# ⌢ y ⌢ #→# ⌢ z ⌢ #)#)

(Think of y as the Gödel number of ψ and z as that of χ.)

For another example, consider ∃Intro. Here, ϕ is the conclusion of a correct inference with one upper derivation iff there is a formula ψ, a closed term t and a variable x such that ψ[t/x] is the end-formula of the upper derivation and ∃x ψ is the conclusion ϕ, i.e., the formula with Gödel number x. So FollowsBy∃Intro(x, d1, n) holds iff

Sent(x) ∧ (∃y < x) (∃v < x) (∃t < d1) (Frm(y) ∧ Term(t) ∧ Var(v) ∧
   FreeFor(y, t, v) ∧ Subst(y, t, v) = (d1)1 ∧ x = #∃# ⌢ v ⌢ y)

5. We first define a helper relation hDeriv(d, i) which holds if d codes a correct derivation at least to i inferences up from the end-formula. hDeriv(d, 0) holds always. Otherwise, hDeriv(d, i + 1) iff either d just codes an assumption or d ends in a correct inference and the codes of the immediate sub-derivations satisfy hDeriv(d′, i).

hDeriv(d, 0) ⇔ true
hDeriv(d, i + 1) ⇔
   (∃x < d) (∃n < d) (Sent(x) ∧ d = ⟨0, x, n⟩) ∨
   ((d)0 = 1 ∧
      (((d)3 = 1 ∧ FollowsBy∧Elim((d)1, (d)4, (d)2)) ∨
      ...
      ((d)3 = 10 ∧ FollowsBy=Intro((d)1, (d)4, (d)2))) ∧
      hDeriv((d)4, i)) ∨
   ((d)0 = 2 ∧
      (((d)3 = 1 ∧ FollowsBy∧Intro((d)1, (d)4, (d)5, (d)2)) ∨
      ...
      ((d)3 = 3 ∧ FollowsBy¬Elim((d)1, (d)4, (d)5, (d)2))) ∧
      hDeriv((d)4, i) ∧ hDeriv((d)5, i)) ∨
   ((d)0 = 3 ∧
      FollowsBy∨Elim((d)1, (d)3, (d)4, (d)5, (d)2) ∧
      hDeriv((d)3, i) ∧ hDeriv((d)4, i) ∧ hDeriv((d)5, i))

This is a primitive recursive definition. Again we can define Deriv(d) as hDeriv(d, d).

Proposition 31.19. Suppose Γ is a primitive recursive set of sentences. Then the


relation PrfΓ ( x, y) expressing “x is the code of a derivation δ of ϕ from undischarged
assumptions in Γ and y is the Gödel number of ϕ” is primitive recursive.

Proof. Suppose “y ∈ Γ” is given by the primitive recursive predicate R Γ (y).


We have to show that PrfΓ ( x, y) which holds iff y is the Gödel number of
a sentence ϕ and x is the code of a natural deduction derivation with end
formula ϕ and all undischarged assumptions in Γ is primitive recursive.
By the previous proposition, the property Deriv( x ) which holds iff x is the
code of a correct derivation δ in natural deduction is primitive recursive. If
x is such a code, then ( x )1 is the code of the end-formula of δ. Thus we can
define PrfΓ ( x, y) by

PrfΓ ( x, y) ⇔ Deriv( x ) ∧ ( x )1 = y ∧
(∀z < x ) (OpenAssum(z, x ) → R Γ (z))


31.8 Axiomatic Derivations


In order to arithmetize axiomatic derivations, we must represent derivations
as numbers. Since derivations are simply sequences of formulas, the obvious
approach is to code every derivation as the code of the sequence of codes of
formulas in it.

Definition 31.20. If δ is an axiomatic derivation consisting of formulas ϕ1, . . . , ϕn, then #δ# is

   ⟨#ϕ1#, . . . , #ϕn#⟩.

Example 31.21. Consider the very simple derivation

1. ψ → (ψ ∨ ϕ)
2. (ψ → (ψ ∨ ϕ)) → ( ϕ → (ψ → (ψ ∨ ϕ)))
3. ϕ → (ψ → (ψ ∨ ϕ))

The Gödel number of this derivation would simply be

⟨#ψ → (ψ ∨ ϕ)#, #(ψ → (ψ ∨ ϕ)) → (ϕ → (ψ → (ψ ∨ ϕ)))#, #ϕ → (ψ → (ψ ∨ ϕ))#⟩.

Having settled on a representation of derivations, we must also show that


we can manipulate such derivations primitive recursively, and express their essential properties and relations primitive recursively. Some operations are simple: e.g., given
a Gödel number d of a derivation, (d)len(d)−1 gives us the Gödel number of its
end-formula. Some are much harder. We’ll at least sketch how to do this. The
goal is to show that the relation “δ is a derivation of ϕ from Γ” is primitive
recursive on the Gödel numbers of δ and ϕ.

Proposition 31.22. The following relations are primitive recursive:

1. ϕ is an axiom.

2. The ith line in δ is justified by modus ponens.

3. The ith line in δ is justified by QR.

4. δ is a correct derivation.

Proof. We have to show that the corresponding relations between Gödel num-
bers of formulas and Gödel numbers of derivations are primitive recursive.

1. We have a given list of axiom schemas, and ϕ is an axiom if it is of the


form given by one of these schemas. Since the list of schemas is finite,
it suffices to show that we can test primitive recursively, for each axiom
schema, if ϕ is of that form. For instance, consider the axiom schema

ψ → ( χ → ψ ).


ϕ is an instance of this axiom schema if there are formulas ψ and χ such that we obtain ϕ when we concatenate ( with ψ with → with ( with χ with → with ψ and with )). We can test the corresponding property of the Gödel number n of ϕ, since concatenation of sequences is primitive recursive, and the Gödel numbers of ψ and χ must be smaller than the Gödel number of ϕ, since when the relation holds, both ψ and χ are sub-formulas of ϕ. Hence, we can define

IsAxψ→(χ→ψ)(n) ⇔ (∃b < n) (∃c < n) (Sent(b) ∧ Sent(c) ∧
   n = #(# ⌢ b ⌢ #→# ⌢ #(# ⌢ c ⌢ #→# ⌢ b ⌢ #))#).

If we have such a definition for each axiom schema, their disjunction


defines the property IsAx(n), “n is the Gödel number of an axiom.”

2. The ith line in δ is justified by modus ponens iff there are lines j and k < i where the sentence on line j is some formula ϕ, the sentence on line k is ϕ → ψ, and the sentence on line i is ψ:

   MP(d, i) ⇔ (∃j < i) (∃k < i)
      (d)k = #(# ⌢ (d)j ⌢ #→# ⌢ (d)i ⌢ #)#

Since bounded quantification, concatenation, and = are primitive recursive, this defines a primitive recursive relation. (A toy version of this test appears in the sketch following this proof.)

3. A line in δ is justified by QR if it is of the form ψ → ∀x ϕ(x), a preceding line is ψ → ϕ(c) for some constant symbol c, and c does not occur in ψ. This is the case iff

   a) there is a sentence ψ and
   b) a formula ϕ(x) with a single variable x free so that
   c) line i contains ψ → ∀x ϕ(x)
   d) some line j < i contains ψ → ϕ[c/x] for a constant c
   e) which does not occur in ψ.

All of these can be tested primitive recursively, since the Gödel numbers of ψ, ϕ(x), and x are less than the Gödel number of the formula on line i, and that of c is less than the Gödel number of the formula on line j:

   QR1(d, i) ⇔ (∃j < i) (∃b < (d)i) (∃x < (d)i) (∃a < (d)i) (∃c < (d)j) (
      Var(x) ∧ Const(c) ∧
      (d)i = #(# ⌢ b ⌢ #→# ⌢ #∀# ⌢ x ⌢ a ⌢ #)# ∧
      (d)j = #(# ⌢ b ⌢ #→# ⌢ Subst(a, c, x) ⌢ #)# ∧
      Sent(b) ∧ Sent(Subst(a, c, x)) ∧ (∀k < len(b)) (b)k ≠ (c)0)


Here we assume that x and c are the Gödel numbers of the variable and constant, considered as terms (i.e., not their symbol codes). We test that x
is the only free variable of ϕ( x ) by testing if ϕ( x )[c/x ] is a sentence, and
ensure that c does not occur in ψ by requiring that every symbol of ψ is
different from c.
We leave the other version of QR as an exercise.

4. d is the Gödel number of a correct derivation iff every line in it is an


axiom, or justified by modus ponens or QR. Hence:

Deriv(d) ⇔ (∀i < len(d)) (IsAx((d)i ) ∨ MP(d, i ) ∨ QR(d, i ))
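As promised, here is a toy version of the tests from this proof, with formulas represented as Python strings standing in for Gödel numbers (string concatenation plays the role of ⌢). Only the schema ψ → (χ → ψ) is checked by is_ax, the real test would also require Sent(b) and Sent(c), and the QR clause is omitted—so this illustrates the structure of the definitions rather than reproducing them.

```python
# Toy axiom/modus-ponens/derivation tests over strings.

def is_ax(phi):
    """Instance test for the single schema psi -> (chi -> psi): try all
    splits phi = "(" + b + "->(" + c + "->" + b + "))" with b, c nonempty.
    (The real test would also require Sent(b) and Sent(c).)"""
    for i in range(2, len(phi)):
        b = phi[1:i]
        rest = phi[i:]
        if rest.startswith("->("):
            tail = rest[3:]              # should be c + "->" + b + "))"
            suffix = "->" + b + "))"
            if tail.endswith(suffix) and len(tail) > len(suffix):
                return True
    return False

def mp(d, i):
    """Line i follows by MP: some line k < i is "(" + d[j] + "->" + d[i] + ")"."""
    return any(d[k] == "(" + d[j] + "->" + d[i] + ")"
               for j in range(i) for k in range(i))

def deriv(d):
    """Every line is an axiom or follows by MP (QR omitted in this toy)."""
    return all(is_ax(d[i]) or mp(d, i) for i in range(len(d)))

demo = ["(p->(q->p))",
        "((p->(q->p))->(r->(p->(q->p))))",
        "(r->(p->(q->p)))"]
print(deriv(demo))   # True
```

The three-line demo derivation has the same shape as Example 31.21: an axiom instance, a second axiom instance with the first as ψ, and a conclusion by modus ponens.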

Proposition 31.23. Suppose Γ is a primitive recursive set of sentences. Then the


relation PrfΓ ( x, y) expressing “x is the code of a derivation δ of ϕ from Γ and y is the
Gödel number of ϕ” is primitive recursive.

Proof. Suppose “y ∈ Γ” is given by the primitive recursive predicate R Γ (y).


We have to show that the relation PrfΓ ( x, y) is primitive recursive, where
PrfΓ ( x, y) holds iff y is the Gödel number of a sentence ϕ and x is the code
of a derivation of ϕ from Γ.
By the previous proposition, the property Deriv( x ) which holds iff x is the
code of a correct derivation δ is primitive recursive. However, that definition
did not take into account the set Γ as an additional way to justify lines in the
derivation. Our primitive recursive test of whether a line is justified by QR also
left out of consideration the requirement that the constant c is not allowed to
occur in Γ. It is possible to amend our definition so that it takes into account
Γ directly, but it is easier to use Deriv and the deduction theorem. Γ ` ϕ iff
there is some finite list of sentences ψ1 , . . . , ψn ∈ Γ such that {ψ1 , . . . , ψn } ` ϕ.
And by the deduction theorem, this is the case if ` (ψ1 → (ψ2 → · · · (ψn →
ϕ) · · · )). Whether a sentence with Gödel number z is of this form can be
tested primitive recursively. So, instead of considering x as the Gödel number
of a derivation of the sentence with Gödel number y from Γ, we consider x as
the Gödel number of a derivation of a nested conditional of the above form
from ∅.
First, if we have a sequence of sentences, we can primitive recursively form
the conditional with all these sentences as antecedents and given sentence as
consequent:

hCond(s, y, 0) = y
hCond(s, y, n + 1) = #(# ⌢ (s)n ⌢ #→# ⌢ hCond(s, y, n) ⌢ #)#
Cond(s, y) = hCond(s, y, len(s))


So we can define PrfΓ ( x, y) by

PrfΓ(x, y) ⇔ (∃s < sequenceBound(x, x)) (
   (x)len(x)−1 = Cond(s, y) ∧
   (∀i < len(s)) RΓ((s)i) ∧
   Deriv(x)).

The bound on s is given by considering that each (s)i is the Gödel number of
a subformula of the last line of the derivation, i.e., is less than ( x )len( x)−1 . The
number of antecedents ψ ∈ Γ, i.e., the length of s, is less than the length of the
last line of x.
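The conditional-building recursion hCond is easy to see in action. In this toy version (ours), formulas are strings standing in for their Gödel numbers:

```python
# Toy version of hCond/Cond: nest the sentences of s as antecedents of y.

def h_cond(s, y, n):
    if n == 0:
        return y
    return "(" + s[n - 1] + "->" + h_cond(s, y, n - 1) + ")"

def cond(s, y):
    return h_cond(s, y, len(s))

print(cond(["psi1", "psi2"], "phi"))   # (psi2->(psi1->phi))
```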

Problems
Problem 31.1. Show that the function flatten(z), which turns the sequence
h# t1 # , . . . , # tn # i into # t1 , . . . , tn # , is primitive recursive.

Problem 31.2. Give a detailed proof of ?? along the lines of the first proof of
??

Problem 31.3. Give a detailed proof of ?? along the lines of the alternate proof
of ??

Problem 31.4. Prove ??. You may make use of the fact that any substring of
a formula which is a formula is a sub-formula of it.

Problem 31.5. Prove ??

Problem 31.6. Define the following relations as in ??:

1. FollowsBy∧R (s, s0 , s00 ),

2. FollowsBy= (s, s0 ),

3. FollowsBy∀R (s, s0 ).

Problem 31.7. Define the following relations as in ??:

1. FollowsBy→Elim ( x, d1 , d2 , n),

2. FollowsBy=Elim ( x, d1 , d2 , n),

3. FollowsBy∨Elim ( x, d1 , d2 , d3 , n),

4. FollowsBy∀Intro ( x, d1 , n).


For the last one, you will have to also show that you can test primitive re-
cursively if the formula with Gödel number x and the derivation with Gödel
number d satisfy the eigenvariable condition, i.e., the eigenvariable a of the
∀Intro inference occurs neither in x nor in an open assumption of d.
Problem 31.8. Define the following relations as in ??:

1. IsAx ϕ→(ψ→( ϕ∧ψ)) (n),

2. IsAx∀ x ϕ( x)→ ϕ(t) (n),

3. QR2 (d, i ) (for the other version of QR).



Chapter 32

Representability in Q

32.1 Introduction
We will describe a very minimal theory of arithmetic called “Q” (or, sometimes, “Robinson’s Q,” after Raphael Robinson). We will say what it means for a function
to be representable in Q, and then we will prove the following:
A function is representable in Q if and only if it is computable.
For one thing, this provides us with another model of computability. But we
will also use it to show that the set { ϕ : Q ` ϕ} is not decidable, by reducing
the halting problem to it. By the time we are done, we will have proved much
stronger things than this.
The language of Q is the language of arithmetic; Q consists of the fol-
lowing axioms (to be used in conjunction with the other axioms and rules of
first-order logic with identity predicate):

∀x ∀y (x′ = y′ → x = y)            (Q1)
∀x 𝟎 ≠ x′                          (Q2)
∀x (x ≠ 𝟎 → ∃y x = y′)             (Q3)
∀x (x + 𝟎) = x                     (Q4)
∀x ∀y (x + y′) = (x + y)′          (Q5)
∀x (x × 𝟎) = 𝟎                     (Q6)
∀x ∀y (x × y′) = ((x × y) + x)     (Q7)
∀x ∀y (x < y ↔ ∃z (z′ + x) = y)    (Q8)

For each natural number n, define the numeral n to be the term 𝟎′′...′ where there are n tick marks in all. So, 0 is the constant symbol 𝟎 by itself, 1 is 𝟎′, 2 is 𝟎′′, etc.
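All eight axioms are true in the standard model N—a fact appealed to later on (for instance in the undecidability proof of section 32.10). A quick finite sanity check in Python (a sample over small numbers, of course not a proof; succ and the sampled range are our stand-ins):

```python
# Spot-check that Q1-Q8 hold in the standard model N on a finite sample.
# For booleans, '<=' encodes implication (a <= b iff a implies b).
import itertools

def succ(x):
    return x + 1

for x, y in itertools.product(range(10), repeat=2):
    assert (succ(x) == succ(y)) <= (x == y)                        # Q1
    assert 0 != succ(x)                                            # Q2
    assert x == 0 or any(x == succ(z) for z in range(x + 1))       # Q3
    assert x + 0 == x                                              # Q4
    assert x + succ(y) == succ(x + y)                              # Q5
    assert x * 0 == 0                                              # Q6
    assert x * succ(y) == x * y + x                                # Q7
    assert (x < y) == any(succ(z) + x == y for z in range(y + 1))  # Q8
print("all sampled instances hold")
```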
As a theory of arithmetic, Q is extremely weak; for example, you can’t even prove very simple facts like ∀x x ≠ x′ or ∀x ∀y (x + y) = (y + x). But we will


see that much of the reason that Q is so interesting is because it is so weak. In


fact, it is just barely strong enough for the incompleteness theorem to hold.
Another reason Q is interesting is because it has a finite set of axioms.
A stronger theory than Q (called Peano arithmetic PA) is obtained by adding
a schema of induction to Q:

( ϕ() ∧ ∀ x ( ϕ( x ) → ϕ( x 0 ))) → ∀ x ϕ( x )

where ϕ( x ) is any formula. If ϕ( x ) contains free variables other than x, we add


universal quantifiers to the front to bind all of them (so that the corresponding
instance of the induction schema is a sentence). For instance, if ϕ( x, y) also
contains the variable y free, the corresponding instance is

∀y (( ϕ() ∧ ∀ x ( ϕ( x ) → ϕ( x 0 ))) → ∀ x ϕ( x ))

Using instances of the induction schema, one can prove much more from the
axioms of PA than from those of Q. In fact, it takes a good deal of work to
find “natural” statements about the natural numbers that can’t be proved in
Peano arithmetic!

Definition 32.1. A function f ( x0 , . . . , xk ) from the natural numbers to the nat-


ural numbers is said to be representable in Q if there is a formula ϕ f ( x0 , . . . , xk , y)
such that whenever f (n0 , . . . , nk ) = m, Q proves

1. ϕ f (n0 , . . . , nk , m)

2. ∀y ( ϕ f (n0 , . . . , nk , y) → m = y).

There are other ways of stating the definition; for example, we could equiv-
alently require that Q proves ∀y ( ϕ f (n0 , . . . , nk , y) ↔ y = m).

Theorem 32.2. A function is representable in Q if and only if it is computable.

There are two directions to proving the theorem. The left-to-right direction
is fairly straightforward once arithmetization of syntax is in place. The other
direction requires more work. Here is the basic idea: we pick “general recur-
sive” as a way of making “computable” precise, and show that every general
recursive function is representable in Q. Recall that a function is general re-
cursive if it can be defined from zero, the successor function succ, and the
projection functions Pin , using composition, primitive recursion, and regular
minimization. So one way of showing that every general recursive function is
representable in Q is to show that the basic functions are representable, and
whenever some functions are representable, then so are the functions defined
from them using composition, primitive recursion, and regular minimization.
In other words, we might show that the basic functions are representable, and
that the representable functions are “closed under” composition, primitive


recursion, and regular minimization. This guarantees that every general re-
cursive function is representable.
It turns out that the step where we would show that representable func-
tions are closed under primitive recursion is hard. In order to avoid this step,
we show first that in fact we can do without primitive recursion. That is, we
show that every general recursive function can be defined from basic func-
tions using composition and regular minimization alone. To do this, we show
that primitive recursion can actually be done by a specific regular minimiza-
tion. However, for this to work, we have to add some additional basic func-
tions: addition, multiplication, and the characteristic function of the identity
relation χ= . Then, we can prove the theorem by showing that all of these basic
functions are representable in Q, and the representable functions are closed
under composition and regular minimization.

32.2 Functions Representable in Q are Computable


Lemma 32.3. Every function that is representable in Q is computable.

Proof. Let’s first give the intuitive idea for why this is true. If f ( x0 , . . . , xk ) is
representable in Q, there is a formula ϕ( x0 , . . . , xk , y) such that

Q ` ϕ f ( n0 , . . . , n k , m ) iff m = f ( n0 , . . . , n k ).

To compute f , we do the following. List all the possible derivations δ in the


language of arithmetic. This is possible to do mechanically. For each one,
check if it is a derivation of a formula of the form ϕ f (n0 , . . . , nk , m). If it is, m
must be = f (n0 , . . . , nk ) and we’ve found the value of f . The search terminates
because Q ` ϕ f (n0 , . . . , nk , f (n0 , . . . , nk )), so eventually we find a δ of the right
sort.
This is not quite precise because our procedure operates on derivations
and formulas instead of just on numbers, and we haven’t explained exactly
why “listing all possible derivations” is mechanically possible. But as we’ve
seen, it is possible to code terms, formulas, and derivations by Gödel numbers.
We’ve also introduced a precise model of computation, the general recursive
functions. And we’ve seen that the relation PrfQ(d, x), which holds iff d is the Gödel number of a derivation of the formula with Gödel number x from the axioms of Q, is (primitive) recursive. Other primitive recursive functions we’ll need are num (Proposition 31.6) and Subst (Proposition 31.11). From these, it is possible to define f by minimization; thus, f is recursive.
First, define

A ( n0 , . . . , n k , m ) =
Subst(Subst(. . . Subst(# ϕ f # , num(n0 ), # x0 # ),
. . . ), num(nk ), # xk # ), num(m), # y# )


This looks complicated, but it’s just the function A(n0 , . . . , nk , m) = # ϕ f (n0 , . . . , nk , m)# .
Now, consider the relation R(n0 , . . . , nk , s) which holds if (s)0 is the Gödel
number of a derivation from Q of ϕ f (n0 , . . . , nk , (s)1 ):

R(n0, . . . , nk, s) iff PrfQ((s)0, A(n0, . . . , nk, (s)1))

If we can find an s such that R(n0, . . . , nk, s) holds, we have found a pair of numbers—(s)0 and (s)1—such that (s)0 is the Gödel number of a derivation of ϕf(n0, . . . , nk, (s)1). So looking for s is like looking for the pair d and m in the informal proof. And a computable function that “looks for” such an s can be defined by regular minimization. Note that R is regular: for every n0, . . . , nk, there is a derivation δ of Q ⊢ ϕf(n0, . . . , nk, f(n0, . . . , nk)), so R(n0, . . . , nk, s) holds for s = ⟨#δ#, f(n0, . . . , nk)⟩. So, we can write f as

f (n0 , . . . , nk ) = (µs R(n0 , . . . , nk , s))1 .

32.3 The Beta Function Lemma


In order to show that we can carry out primitive recursion if addition, multi-
plication, and χ= are available, we need to develop functions that handle se-
quences. (If we had exponentiation as well, our task would be easier.) When
we had primitive recursion, we could define things like the “n-th prime,”
and pick a fairly straightforward coding. But here we do not have primitive
recursion—in fact we want to show that we can do primitive recursion using
minimization—so we need to be more clever.

Lemma 32.4. There is a function β(d, i ) such that for every sequence a0 , . . . , an there
is a number d, such that for every i ≤ n, β(d, i ) = ai . Moreover, β can be defined
from the basic functions using just composition and regular minimization.

Think of d as coding the sequence ⟨a0, . . . , an⟩, and β(d, i) as returning the i-th element. (Note that this “coding” does not use the power-of-primes coding we’re already familiar with!) The lemma is fairly minimal; it doesn’t say we
can concatenate sequences or append elements, or even that we can compute
d from a0 , . . . , an using functions definable by composition and regular min-
imization. All it says is that there is a “decoding” function such that every
sequence is “coded.”
The use of the notation β is Gödel’s. To repeat, the hard part of proving
the lemma is defining a suitable β using the seemingly restricted resources,
i.e., using just composition and minimization—however, we’re allowed to use
addition, multiplication, and χ= . There are various ways to prove this lemma,
but one of the cleanest is still Gödel’s original method, which used a number-
theoretic fact called the Chinese Remainder theorem.


Definition 32.5. Two natural numbers a and b are relatively prime if their great-
est common divisor is 1; in other words, they have no other divisors in com-
mon.
Definition 32.6. a ≡ b mod c means c | ( a − b), i.e., a and b have the same
remainder when divided by c.
Here is the Chinese Remainder theorem:
Theorem 32.7. Suppose x0 , . . . , xn are (pairwise) relatively prime. Let y0 , . . . , yn be
any numbers. Then there is a number z such that

z ≡ y0 mod x0
z ≡ y1 mod x1
..
.
z ≡ yn mod xn .

Here is how we will use the Chinese Remainder theorem: if x0 , . . . , xn are


bigger than y0 , . . . , yn respectively, then we can take z to code the sequence
hy0 , . . . , yn i. To recover yi , we need only divide z by xi and take the remainder.
To use this coding, we will need to find suitable values for x0 , . . . , xn .
A couple of observations will help us in this regard. Given y0 , . . . , yn , let

j = max(n, y0 , . . . , yn ) + 1,

and let

x0 = 1 + j!
x1 = 1 + 2 · j!
x2 = 1 + 3 · j!
..
.
xn = 1 + (n + 1) · j!

Then two things are true:


1. x0 , . . . , xn are relatively prime.

2. For each i, yi < xi .


To see that (1) is true, note that if p is a prime number and p | xi and p | xk ,
then p | 1 + (i + 1) j! and p | 1 + (k + 1) j!. But then p divides their difference,

(1 + (i + 1) j!) − (1 + (k + 1) j!) = (i − k) j!.

Since p divides 1 + (i + 1) j!, it can’t divide j! as well (otherwise, the first divi-
sion would leave a remainder of 1). So p divides i − k, since p divides (i − k) j!.


But |i − k| is at most n, and we have chosen j > n, so this implies that p | j!,
again a contradiction. So there is no prime number dividing both xi and xk .
Clause (2) is easy: we have yi < j < j! < xi .
Now let us prove the β function lemma. Remember that we can use 0,
successor, plus, times, χ= , projections, and any function defined from them
using composition and minimization applied to regular functions. We can
also use a relation if its characteristic function is so definable. As before we can
show that these relations are closed under boolean combinations and bounded
quantification; for example:

1. not( x ) = χ= ( x, 0)

2. (min x ≤ z) R( x, y) = µx ( R( x, y) ∨ x = z)

3. (∃ x ≤ z) R( x, y) ⇔ R((min x ≤ z) R( x, y), y)

We can then show that all of the following are also definable without primitive
recursion:

1. The pairing function, J(x, y) = ½[(x + y)(x + y + 1)] + x

2. Projections

   K(z) = (min x ≤ z) (∃y ≤ z [z = J(x, y)])

and

   L(z) = (min y ≤ z) (∃x ≤ z [z = J(x, y)]).

3. x < y

4. x | y

5. The function rem( x, y) which returns the remainder when y is divided


by x

Now define
β∗ (d0 , d1 , i ) = rem(1 + (i + 1)d1 , d0 )
and
β(d, i ) = β∗ (K (d), L(d), i ).
This is the function we need. Given a0 , . . . , an , as above, let

j = max(n, a0 , . . . , an ) + 1,

and let d1 = j!. By the observations above, we know that 1 + d1 , 1 + 2d1 , . . . , 1 +


(n + 1)d1 are relatively prime and all are bigger than a0 , . . . , an . By the Chinese
Remainder theorem there is a value d0 such that for each i,

d0 ≡ a i mod (1 + (i + 1)d1 )


and so (because d1 is greater than ai),

   ai = rem(1 + (i + 1)d1, d0).

Let d = J(d0, d1). Then for each i ≤ n, we have

   β(d, i) = β∗(d0, d1, i) = rem(1 + (i + 1)d1, d0) = ai,

which is what we need. This completes the proof of the β function lemma.
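The construction in the proof can be checked numerically. The sketch below (all names ours) computes d1 = j!, finds d0 by the Chinese Remainder theorem (using Python's built-in modular inverse pow(M, -1, m), available from Python 3.8), and decodes with β. K and L are computed in closed form rather than by the bounded searches of the text, since the numbers involved are large.

```python
# Numerical check of the beta function lemma.
from math import factorial, isqrt

def J(x, y):
    return (x + y) * (x + y + 1) // 2 + x

def K(z):
    w = (isqrt(8 * z + 1) - 1) // 2   # largest w with w(w+1)/2 <= z
    return z - w * (w + 1) // 2

def L(z):
    w = (isqrt(8 * z + 1) - 1) // 2
    return w - K(z)

def beta(d, i):
    return K(d) % (1 + (i + 1) * L(d))   # rem(1 + (i+1) d1, d0)

def beta_code(seq):
    """Some d with beta(d, i) = seq[i], built as in the proof."""
    n = len(seq) - 1
    j = max([n] + list(seq)) + 1
    d1 = factorial(j)
    d0, M = 0, 1
    for i, a in enumerate(seq):
        m = 1 + (i + 1) * d1             # pairwise relatively prime moduli
        # adjust d0 by a multiple of M so that d0 = a (mod m) -- iterative CRT
        d0 += M * ((a - d0) * pow(M, -1, m) % m)
        M *= m
    return J(d0, d1)

seq = [3, 1, 4, 1, 5]
d = beta_code(seq)
print([beta(d, i) for i in range(len(seq))])   # [3, 1, 4, 1, 5]
```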

32.4 Simulating Primitive Recursion


Now we can show that definition by primitive recursion can be “simulated”
by regular minimization using the beta function. Suppose we have f (~z) and
g(u, v, ~z). Then the function h( x, ~z) defined from f and g by primitive recur-
sion is

h(0, ~z) = f (~z)


h( x + 1, ~z) = g( x, h( x, ~z), ~z).

We need to show that h can be defined from f and g using just composition
and regular minimization, using the basic functions and functions defined
from them using composition and regular minimization (such as β).
Lemma 32.8. If h can be defined from f and g using primitive recursion, it can be
defined from f , g, the functions zero, succ, Pin , add, mult, χ= , using composition
and regular minimization.

Proof. First, define an auxiliary function ĥ( x, ~z) which returns the least num-
ber d such that d codes a sequence which satisfies
1. (d)0 = f (~z), and
2. for each i < x, (d)i+1 = g(i, (d)i , ~z),
where now (d)i is short for β(d, i ). In other words, ĥ returns the sequence
hh(0, ~z), h(1, ~z), . . . , h( x, ~z)i. We can write ĥ as
ĥ(x, ~z) = µd (β(d, 0) = f(~z) ∧ (∀i < x) β(d, i + 1) = g(i, β(d, i), ~z)).

Note: no primitive recursion is needed here, just minimization. The function we minimize is regular because of the β function lemma (Lemma 32.4).
But now we have
h( x, ~z) = β(ĥ( x, ~z), x ),
so h can be defined from the basic functions using just composition and regu-
lar minimization.
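Here is the simulation at work for a concrete primitive recursion, h(x) = x! (with f() = 1 and g(u, v) = (u + 1) · v). Actually performing the µ-search for the least suitable d would be hopeless in practice, so this sketch (ours; the β helpers are repeated so it runs on its own) constructs a witness d directly and verifies the two clauses in the definition of ĥ.

```python
# Simulating primitive recursion (factorial) via the beta function.
from math import factorial, isqrt

def J(x, y):
    return (x + y) * (x + y + 1) // 2 + x

def K(z):
    w = (isqrt(8 * z + 1) - 1) // 2
    return z - w * (w + 1) // 2

def L(z):
    w = (isqrt(8 * z + 1) - 1) // 2
    return w - K(z)

def beta(d, i):
    return K(d) % (1 + (i + 1) * L(d))

def beta_code(seq):                       # as in the previous sketch
    j = max([len(seq) - 1] + list(seq)) + 1
    d1, d0, M = factorial(j), 0, 1
    for i, a in enumerate(seq):
        m = 1 + (i + 1) * d1
        d0 += M * ((a - d0) * pow(M, -1, m) % m)
        M *= m
    return J(d0, d1)

def f():
    return 1

def g(u, v):
    return (u + 1) * v                    # so h(x) = x!

x = 5
course = [f()]                            # the sequence <h(0), ..., h(x)>
for i in range(x):
    course.append(g(i, course[i]))
d = beta_code(course)                     # a witness for the mu-search in h-hat

assert beta(d, 0) == f()
assert all(beta(d, i + 1) == g(i, beta(d, i)) for i in range(x))
print(beta(d, x))                         # 120 = h(5)
```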


32.5 Basic Functions are Representable in Q


First we have to show that all the basic functions are representable in Q. In the
end, we need to show how to assign to each k-ary basic function f ( x0 , . . . , xk−1 )
a formula ϕ f ( x0 , . . . , xk−1 , y) that represents it.
We will be able to represent zero, successor, plus, times, the characteristic function for equality, and projections. In each case, the appropriate representing formula is entirely straightforward; for example, zero is represented by the formula y = 𝟎, successor is represented by the formula x0′ = y, and addition is represented by the formula (x0 + x1) = y. The work involves showing
that Q can prove the relevant sentences; for example, saying that addition
is represented by the formula above involves showing that for every pair of
natural numbers m and n, Q proves

n + m = n + m and
∀y ((n + m) = y → y = n + m).

Proposition 32.9. The zero function zero(x) = 0 is represented in Q by y = 𝟎.

Proposition 32.10. The successor function succ(x) = x + 1 is represented in Q by y = x′.

Proposition 32.11. The projection function P^n_i(x0, . . . , xn−1) = xi is represented in Q by y = xi.

Proposition 32.12. The characteristic function of =,

   χ=(x0, x1) = { 1 if x0 = x1
                { 0 otherwise

is represented in Q by

   (x0 = x1 ∧ y = 1) ∨ (x0 ≠ x1 ∧ y = 0).

The proof requires the following lemma.

Lemma 32.13. Given natural numbers n and m, if n ≠ m, then Q ⊢ n ≠ m.

Proof. Use induction on n to show that for every m, if n ≠ m, then Q ⊢ n ≠ m.
In the base case, n = 0. If m is not equal to 0, then m = k + 1 for some natural number k. We have an axiom that says ∀x 𝟎 ≠ x′. By a quantifier axiom, replacing x by k, we can conclude 𝟎 ≠ k′. But k′ is just m.
In the induction step, we can assume the claim is true for n, and consider n + 1. Let m be any natural number. There are two possibilities: either m = 0 or for some k we have m = k + 1. The first case is handled as above. In the second case, suppose n + 1 ≠ k + 1. Then n ≠ k. By the induction hypothesis


for n we have Q ⊢ n ≠ k. We have an axiom that says ∀x ∀y (x′ = y′ → x = y). Using a quantifier axiom, we have n′ = k′ → n = k. Using propositional logic, we can conclude, in Q, n ≠ k → n′ ≠ k′. Using modus ponens, we can conclude n′ ≠ k′, which is what we want, since k′ is m.

Note that the lemma does not say much: in essence it says that Q can prove that different numerals denote different objects. For example, Q proves 𝟎′′ ≠ 𝟎′′′. But showing that this holds in general requires some care. Note also that although we are using induction, it is induction outside of Q.

Proof of Proposition 32.12. If n = m, then n and m are the same term, and χ=(n, m) = 1. But Q ⊢ (n = m ∧ 1 = 1), so it proves ϕ=(n, m, 1). If n ≠ m, then χ=(n, m) = 0. By Lemma 32.13, Q ⊢ n ≠ m and so also (n ≠ m ∧ 𝟎 = 𝟎). Thus Q ⊢ ϕ=(n, m, 0).
For the second part, we also have two cases. If n = m, we have to show that Q ⊢ ∀y (ϕ=(n, m, y) → y = 1). Arguing informally, suppose ϕ=(n, m, y), i.e.,

   (n = n ∧ y = 1) ∨ (n ≠ n ∧ y = 0)

The left disjunct implies y = 1 by logic; the right contradicts n = n, which is provable by logic.
Suppose, on the other hand, that n ≠ m. Then ϕ=(n, m, y) is

   (n = m ∧ y = 1) ∨ (n ≠ m ∧ y = 0)

Here, the left disjunct contradicts n ≠ m, which is provable in Q by Lemma 32.13; the right disjunct entails y = 0.

Proposition 32.14. The addition function add(x0, x1) = x0 + x1 is represented in Q by

   y = (x0 + x1).

Lemma 32.15. Q ⊢ (n + m) = n + m

Proof. We prove this by induction on m. If m = 0, the claim is that Q ⊢ (n + 𝟎) = n. This follows by axiom Q4. Now suppose the claim for m; let’s prove the claim for m + 1, i.e., prove that Q ⊢ (n + m + 1) = n + m + 1. Note that m + 1 is just m′, and n + m + 1 is just (n + m)′. By axiom Q5, Q ⊢ (n + m′) = (n + m)′. By induction hypothesis, Q ⊢ (n + m) = n + m. So Q ⊢ (n + m′) = (n + m)′.

Proof of Proposition 32.14. The formula ϕadd(x0, x1, y) representing add is y = (x0 + x1). First we show that if add(n, m) = k, then Q ⊢ ϕadd(n, m, k), i.e., Q ⊢ k = (n + m). But since k = n + m, k just is n + m, and we’ve shown in Lemma 32.15 that Q ⊢ (n + m) = n + m.
We also have to show that if add(n, m) = k, then

   Q ⊢ ∀y (ϕadd(n, m, y) → y = k).


Suppose we have n + m = y. Since

Q ` (n + m) = n + m,

we can replace the left side with n + m and get n + m = y, for arbitrary y.

Proposition 32.16. The multiplication function mult( x0 , x1 ) = x0 · x1 is repre-


sented in Q by
y = ( x0 × x1 ).

Proof. Exercise.

Lemma 32.17. Q ` (n × m) = n · m

Proof. Exercise.

32.6 Composition is Representable in Q


Suppose h is defined by

h( x0 , . . . , xl −1 ) = f ( g0 ( x0 , . . . , xl −1 ), . . . , gk−1 ( x0 , . . . , xl −1 )).

where we have already found formulas ϕ f , ϕ g0 , . . . , ϕ gk−1 representing the func-


tions f , and g0 , . . . , gk−1 , respectively. We have to find a formula ϕh represent-
ing h.
Let’s start with a simple case, where all functions are 1-place, i.e., consider
h( x ) = f ( g( x )). If ϕ f (y, z) represents f , and ϕ g ( x, y) represents g, we need
a formula ϕh ( x, z) that represents h. Note that h( x ) = z iff there is a y such
that both z = f (y) and y = g( x ). (If h( x ) = z, then g( x ) is such a y; if such a
y exists, then since y = g( x ) and z = f (y), z = f ( g( x )).) This suggests that
∃y ( ϕ g ( x, y) ∧ ϕ f (y, z)) is a good candidate for ϕh ( x, z). We just have to verify
that Q proves the relevant formulas.

Proposition 32.18. If h(n) = m, then Q ` ϕh (n, m).

Proof. Suppose h(n) = m, i.e., f ( g(n)) = m. Let k = g(n). Then

Q ` ϕ g (n, k)

since ϕ g represents g, and

Q ` ϕ f (k, m)

since ϕ f represents f . Thus,

Q ` ϕ g (n, k) ∧ ϕ f (k, m)


and consequently also

Q ` ∃y ( ϕ g (n, y) ∧ ϕ f (y, m)),


i.e., Q ` ϕh (n, m).

Proposition 32.19. If h(n) = m, then Q ` ∀z ( ϕh (n, z) → z = m).

Proof. Suppose h(n) = m, i.e., f ( g(n)) = m. Let k = g(n). Then


Q ` ∀y ( ϕ g (n, y) → y = k )

since ϕ g represents g, and

Q ` ∀z ( ϕ f (k, z) → z = m)

since ϕ f represents f . Using just a little bit of logic, we can show that also

Q ` ∀z (∃y ( ϕ g (n, y) ∧ ϕ f (y, z)) → z = m).


i.e., Q ` ∀y ( ϕh (n, y) → y = m).

The same idea works in the more complex case where f and gi have arity
greater than 1.
Proposition 32.20. If ϕf(y0, . . . , yk−1, z) represents f(y0, . . . , yk−1) in Q, and ϕgi(x0, . . . , xl−1, y) represents gi(x0, . . . , xl−1) in Q, then

   ∃y0 . . . ∃yk−1 (ϕg0(x0, . . . , xl−1, y0) ∧ · · · ∧
      ϕgk−1(x0, . . . , xl−1, yk−1) ∧ ϕf(y0, . . . , yk−1, z))

represents

   h(x0, . . . , xl−1) = f(g0(x0, . . . , xl−1), . . . , gk−1(x0, . . . , xl−1)).
Proof. Exercise.

32.7 Regular Minimization is Representable in Q


Let’s consider unbounded search. Suppose g( x, z) is regular and representable
in Q, say by the formula ϕ g ( x, z, y). Let f be defined by f (z) = µx [ g( x, z) =
0]. We would like to find a formula ϕ f (z, y) representing f . The value of f (z)
is that number x which (a) satisfies g( x, z) = 0 and (b) is the least such, i.e.,
for any w < x, g(w, z) 6= 0. So the following is a natural choice:
ϕf(z, y) ≡ ϕg(y, z, 𝟎) ∧ ∀w (w < y → ¬ϕg(w, z, 𝟎)).
In the general case, of course, we would have to replace z with z0 , . . . , zk .
The proof, again, will involve some lemmas about things Q is strong enough
to prove.


Lemma 32.21. For every variable x and every natural number n,

   Q ⊢ (x′ + n) = (x + n)′.

Proof. The proof is, as usual, by induction on n. In the base case, n = 0, we need to show that Q proves (x′ + 𝟎) = (x + 𝟎)′. But we have:

   Q ⊢ (x′ + 𝟎) = x′         by axiom Q4          (32.1)
   Q ⊢ (x + 𝟎) = x           by axiom Q4          (32.2)
   Q ⊢ (x + 𝟎)′ = x′         by (32.2)            (32.3)
   Q ⊢ (x′ + 𝟎) = (x + 𝟎)′   by (32.1) and (32.3)

In the induction step, we can assume that we have shown that Q ⊢ (x′ + n) = (x + n)′. Since n + 1 is n′, we need to show that Q proves (x′ + n′) = (x + n′)′. We have:

   Q ⊢ (x′ + n′) = (x′ + n)′   by axiom Q5                           (32.4)
   Q ⊢ (x′ + n)′ = (x + n′)′   by inductive hypothesis and axiom Q5  (32.5)
   Q ⊢ (x′ + n′) = (x + n′)′   by (32.4) and (32.5)

It is again worth mentioning that this is weaker than saying that Q proves ∀x ∀y (x′ + y) = (x + y)′. Although this sentence is true in N, Q does not prove it.

Lemma 32.22. 1. Q ⊢ ∀x ¬x < 𝟎.

2. For every natural number n,

   Q ⊢ ∀x (x < n + 1 → (x = 0 ∨ · · · ∨ x = n)).

Proof. Let us do 1 and part of 2, informally (i.e., only giving hints as to how to construct the formal derivation).
For part 1, by the definition of <, we need to prove ¬∃y (y′ + x) = 𝟎 in Q, which is equivalent (using the axioms and rules of first-order logic) to ∀y (y′ + x) ≠ 𝟎. Here is the idea: suppose (y′ + x) = 𝟎. If x = 𝟎, we have (y′ + 𝟎) = 𝟎. But by axiom Q4 of Q, we have (y′ + 𝟎) = y′, and by axiom Q2 we have y′ ≠ 𝟎, a contradiction. So ∀y (y′ + x) ≠ 𝟎. If x ≠ 𝟎, by axiom Q3, there is a z such that x = z′. But then we have (y′ + z′) = 𝟎. By axiom Q5, we have (y′ + z)′ = 𝟎, again contradicting axiom Q2.
For part 2, use induction on n. Let us consider the base case, when n = 0. In that case, we need to show x < 1 → x = 𝟎. Suppose x < 1. Then by the defining axiom for <, we have ∃y (y′ + x) = 𝟎′. Suppose y has that property; so we have y′ + x = 𝟎′.


We need to show x = 𝟎. By axiom Q3, if x ≠ 𝟎, we get x = z′ for some z. Then we have (y′ + z′) = 𝟎′. By axiom Q5 of Q, we have (y′ + z)′ = 𝟎′. By axiom Q1, we have (y′ + z) = 𝟎. But this means, by definition, z < 𝟎, contradicting part 1.

Lemma 32.23. For every m ∈ N,

   Q ⊢ ∀y ((y < m ∨ m < y) ∨ y = m).
Proof. By induction on m. First, consider the case m = 0. Q ⊢ ∀y (y ≠ 𝟎 → ∃z y = z′) by Q3. But if y = z′, then (z′ + 𝟎) = (y + 𝟎) by the logic of =. By Q4, (y + 𝟎) = y, so we have (z′ + 𝟎) = y, and hence ∃z (z′ + 𝟎) = y. By the definition of < in Q8, 𝟎 < y. If 𝟎 < y, then also 𝟎 < y ∨ y < 𝟎. We obtain: y ≠ 𝟎 → (𝟎 < y ∨ y < 𝟎), which is equivalent to (𝟎 < y ∨ y < 𝟎) ∨ y = 𝟎.
Now suppose we have

   Q ⊢ ∀y ((y < m ∨ m < y) ∨ y = m)

and we want to show

   Q ⊢ ∀y ((y < m + 1 ∨ m + 1 < y) ∨ y = m + 1)

The first disjunct y < m is equivalent (by Q8) to ∃z (z′ + y) = m. If (z′ + y) = m, then also (z′ + y)′ = m′. By Q4, (z′ + y)′ = (z′′ + y). Hence, (z′′ + y) = m′. We get ∃u (u′ + y) = m + 1 by existentially generalizing on z′ and keeping in mind that m′ is m + 1. Hence, if y < m then y < m + 1.
Now suppose m < y, i.e., ∃z (z′ + m) = y. By Q3 and some logic, we have z = 𝟎 ∨ ∃u z = u′. If z = 𝟎, we have (𝟎′ + m) = y. Since Q ⊢ (𝟎′ + m) = m + 1, we have y = m + 1. Now suppose ∃u z = u′. Then:

   y = (z′ + m)            by assumption
   (z′ + m) = (u′′ + m)    from z = u′
   (u′′ + m) = (u′ + m)′   by Lemma 32.21
   (u′ + m)′ = (u′ + m′)   by Q5, so
   y = (u′ + m + 1)

By existential generalization, ∃u (u′ + m + 1) = y, i.e., m + 1 < y. So, if m < y, then m + 1 < y ∨ y = m + 1.
Finally, assume y = m. Then, since Q ⊢ (𝟎′ + m) = m + 1, (𝟎′ + y) = m + 1. From this we get ∃z (z′ + y) = m + 1, or y < m + 1.
Hence, from each disjunct of the case for m, we can obtain the case for m + 1.

Proposition 32.24. If ϕg(x, z, y) represents g(x, z) in Q, then

   ϕf(z, y) ≡ ϕg(y, z, 𝟎) ∧ ∀w (w < y → ¬ϕg(w, z, 𝟎))

represents f(z) = µx [g(x, z) = 0].


Proof. First we show that if f(n) = m, then Q ⊢ ϕf(n, m), i.e.,

   Q ⊢ ϕg(m, n, 𝟎) ∧ ∀w (w < m → ¬ϕg(w, n, 𝟎)).

Since ϕg(x, z, y) represents g(x, z) and g(m, n) = 0 if f(n) = m, we have

   Q ⊢ ϕg(m, n, 𝟎).

If f(n) = m, then for every k < m, g(k, n) ≠ 0. So

   Q ⊢ ¬ϕg(k, n, 𝟎).

We get that

   Q ⊢ ∀w (w < m → ¬ϕg(w, n, 𝟎)).   (32.6)

by Lemma 32.22 (by (1) in case m = 0 and by (2) otherwise).
Now let’s show that if f(n) = m, then Q ⊢ ∀y (ϕf(n, y) → y = m). We again sketch the argument informally, leaving the formalization to the reader.
Suppose ϕf(n, y). From this we get (a) ϕg(y, n, 𝟎) and (b) ∀w (w < y → ¬ϕg(w, n, 𝟎)). By Lemma 32.23, (y < m ∨ m < y) ∨ y = m. We’ll show that both y < m and m < y lead to a contradiction.
If m < y, then ¬ϕg(m, n, 𝟎) from (b). But m = f(n), so g(m, n) = 0, and so Q ⊢ ϕg(m, n, 𝟎) since ϕg represents g. So we have a contradiction.
Now suppose y < m. Then since Q ⊢ ∀w (w < m → ¬ϕg(w, n, 𝟎)) by (32.6), we get ¬ϕg(y, n, 𝟎). This again contradicts (a).

32.8 Computable Functions are Representable in Q


Theorem 32.25. Every computable function is representable in Q.

Proof. For definiteness, and using the Church-Turing Thesis, let’s say that a function is computable iff it is general recursive. The general recursive functions are those which can be defined from the zero function zero, the successor function succ, and the projection functions P^n_i using composition, primitive recursion, and regular minimization. By Lemma 32.8, any function h that can be defined from f and g using primitive recursion can also be defined using composition and regular minimization from f, g, and zero, succ, P^n_i, add, mult, χ=. Consequently, a function is general recursive iff it can be defined from zero, succ, P^n_i, add, mult, χ= using composition and regular minimization.
We’ve furthermore shown that the basic functions in question are representable in Q (Propositions 32.9 to 32.12, 32.14, and 32.16), and that any function defined from representable functions by composition or regular minimization (Propositions 32.20 and 32.24) is also representable. Thus every general recursive function is representable in Q.


We have shown that the set of computable functions can be characterized


as the set of functions representable in Q. In fact, the proof is more general.
From the definition of representability, it is not hard to see that any theory
extending Q (or in which one can interpret Q) can represent the computable
functions. But, conversely, in any proof system in which the notion of proof is
computable, every representable function is computable. So, for example, the
set of computable functions can be characterized as the set of functions repre-
sentable in Peano arithmetic, or even Zermelo-Fraenkel set theory. As Gödel
noted, this is somewhat surprising. We will see that when it comes to prov-
ability, questions are very sensitive to which theory you consider; roughly,
the stronger the axioms, the more you can prove. But across a wide range
of axiomatic theories, the representable functions are exactly the computable
ones; stronger theories do not represent more functions as long as they are
axiomatizable.

32.9 Representing Relations


Let us say what it means for a relation to be representable.

Definition 32.26. A relation R( x0 , . . . , xk ) on the natural numbers is repre-


sentable in Q if there is a formula ϕ R ( x0 , . . . , xk ) such that whenever R(n0 , . . . , nk )
is true, Q proves ϕ R (n0 , . . . , nk ), and whenever R(n0 , . . . , nk ) is false, Q proves
¬ ϕ R ( n0 , . . . , n k ).

Theorem 32.27. A relation is representable in Q if and only if it is computable.

Proof. For the forwards direction, suppose R( x0 , . . . , xk ) is represented by the


formula ϕ R ( x0 , . . . , xk ). Here is an algorithm for computing R: on input n0 ,
. . . , nk , simultaneously search for a proof of ϕ R (n0 , . . . , nk ) and a proof of
¬ ϕ R (n0 , . . . , nk ). By our hypothesis, the search is bound to find one or the
other; if it is the first, report “yes,” and otherwise, report “no.”
In the other direction, suppose R( x0 , . . . , xk ) is computable. By defini-
tion, this means that the function χ R ( x0 , . . . , xk ) is computable. By ??, χ R is
represented by a formula, say ϕχR ( x0 , . . . , xk , y). Let ϕ R ( x0 , . . . , xk ) be the
formula ϕχR ( x0 , . . . , xk , 1). Then for any n0 , . . . , nk , if R(n0 , . . . , nk ) is true,
then χ R (n0 , . . . , nk ) = 1, in which case Q proves ϕχR (n0 , . . . , nk , 1), and so
Q proves ϕ R (n0 , . . . , nk ). On the other hand, if R(n0 , . . . , nk ) is false, then
χ R (n0 , . . . , nk ) = 0. This means that Q proves

∀ y ( ϕ χ R ( n0 , . . . , n k , y ) → y = 0).

Since Q proves 0 6= 1, Q proves ¬ ϕχR (n0 , . . . , nk , 1), and so it proves ¬ ϕ R (n0 , . . . , nk ).


32.10 Undecidability
We call a theory T undecidable if there is no computational procedure which, after finitely many steps, unfailingly provides a correct answer to the question “does T prove ϕ?” for any sentence ϕ in the language of T. So Q would be decidable iff there were a computational procedure which decides, given a sentence ϕ in the language of arithmetic, whether Q ⊢ ϕ or not. We can make this more precise by asking: Is the relation ProvQ(y), which holds of y iff y is the Gödel number of a sentence provable in Q, recursive? The answer is: no.

Theorem 32.28. Q is undecidable, i.e., the relation

ProvQ (y) ⇔ Sent(y) ∧ ∃ x PrfQ ( x, y)

is not recursive.

Proof. Suppose it were. Then we could solve the halting problem as follows: Given e and n, we know that ϕe(n) ↓ iff there is an s such that T(e, n, s), where T is Kleene’s predicate from ??. Since T is primitive recursive it is representable in Q by a formula ψT, that is, Q ⊢ ψT(e, n, s) iff T(e, n, s). If Q ⊢ ψT(e, n, s) then also Q ⊢ ∃y ψT(e, n, y). If no such s exists, then Q ⊢ ¬ψT(e, n, s) for every s. But Q is ω-consistent, i.e., if Q ⊢ ¬ϕ(n) for every n ∈ N, then Q ⊬ ∃y ϕ(y). We know this because the axioms of Q are true in the standard model N. So, Q ⊬ ∃y ψT(e, n, y). In other words, Q ⊢ ∃y ψT(e, n, y) iff there is an s such that T(e, n, s), i.e., iff ϕe(n) ↓. From e and n we can compute #∃y ψT(e, n, y)#; let g(e, n) be the primitive recursive function which does that. So

   h(e, n) = { 1 if ProvQ(g(e, n))
             { 0 otherwise.

This would show that h is recursive if ProvQ is. But h is not recursive, by ??, so ProvQ cannot be either.
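The shape of the reduction is worth displaying, if only schematically. In the sketch below (all names ours), prov_q is the hypothetical decider for ProvQ whose existence the theorem refutes, and goedel_g stands for the primitive recursive function g(e, n) computing #∃y ψT(e, n, y)#; neither can actually be supplied, which is the point.

```python
# Purely schematic: a decider for ProvQ would yield a halting decider.
# prov_q and goedel_g are hypothetical arguments, not implementable functions.

def h(e, n, prov_q, goedel_g):
    """The function h from the proof: 1 iff phi_e(n) halts, *given* the
    impossible decider prov_q. Since h cannot be recursive, prov_q isn't."""
    return 1 if prov_q(goedel_g(e, n)) else 0
```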

Corollary 32.29. First-order logic is undecidable.

Proof. If first-order logic were decidable, provability in Q would be as well, since Q ⊢ ϕ iff ⊢ ω → ϕ, where ω is the conjunction of the axioms of Q.

Problems
Problem 32.1. Prove that y = 𝟎, y = x′, and y = xi represent zero, succ, and P^n_i, respectively.

Problem 32.2. Prove ??.

Problem 32.3. Use ?? to prove ??.


Problem 32.4. Using the proofs of ?? and ?? as a guide, carry out the proof of
?? in detail.

Problem 32.5. Show that if R is representable in Q, so is χ R .

This chapter depends on material in the chapter on computability theory, but can be left out if that hasn't been covered. It's currently a basic conversion of Jeremy Avigad's notes, has not been revised, and is missing exercises.



Chapter 33

Theories and Computability

33.1 Introduction

This section should be rewritten.

We have the following:

1. A definition of what it means for a function to be representable in Q (??)

2. a definition of what it means for a relation to be representable in Q (??)

3. a theorem asserting that the representable functions of Q are exactly the


computable ones (??)

4. a theorem asserting that the representable relations of Q are exactly the


computable ones (??)

A theory is a set of sentences that is deductively closed, that is, with the
property that whenever T proves ϕ then ϕ is in T. It is probably best to think
of a theory as being a collection of sentences, together with all the things that
these sentences imply. From now on, I will use Q to refer to the theory consist-
ing of the set of sentences derivable from the eight axioms in ??. Remember
that we can code formulas of Q as numbers; if ϕ is such a formula, let # ϕ#
denote the number coding ϕ. Modulo this coding, we can now ask whether
various sets of formulas are computable or not.

33.2 Q is C.e.-Complete
Theorem 33.1. Q is c.e. but not decidable. In fact, it is a complete c.e. set.


Proof. It is not hard to see that Q is c.e., since it is the set of (codes for) sen-
tences y such that there is a proof x of y in Q:

Q = {y : ∃ x PrfQ ( x, y)}.

But we know that PrfQ ( x, y) is computable (in fact, primitive recursive), and
any set that can be written in the above form is c.e.
Saying that it is a complete c.e. set is equivalent to saying that K ≤m Q,
where K = { x : ϕ x ( x ) ↓}. So let us show that K is reducible to Q. Since
Kleene’s predicate T (e, x, s) is primitive recursive, it is representable in Q, say,
by ϕ T . Then for every x, we have

x ∈ K → ∃s T ( x, x, s)
→ ∃s (Q ` ϕ T ( x, x, s))
→ Q ` ∃s ϕ T ( x, x, s).

Conversely, if Q ` ∃s ϕ T ( x, x, s), then, in fact, for some natural number n the


formula ϕ T ( x, x, n) must be true. Now, if T ( x, x, n) were false, Q would prove
¬ ϕ T ( x, x, n), since ϕ T represents T. But then Q proves a false formula, which
is a contradiction. So T ( x, x, n) must be true, which implies ϕ x ( x ) ↓.
In short, we have that for every x, x is in K if and only if Q proves ∃s ϕ T ( x, x, s). So the function f which takes x to (a code for) the sentence ∃s ϕ T ( x, x, s) is a reduction of K to Q.
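As a program, the reduction is just a computable translation of inputs into sentences. A minimal sketch, with goedel_number a hypothetical (computable) coding of sentences, which are here modelled as nested tuples:

    def f(x, goedel_number):
        """The many-one reduction of K to Q: map x to (the code of)
        the sentence 'exists s phi_T(x, x, s)'.  Then x is in K if
        and only if f(x) is in Q."""
        sentence = ("exists", "s", ("phi_T", x, x, "s"))
        return goedel_number(sentence)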

33.3 ω-Consistent Extensions of Q are Undecidable


The proof that Q is c.e.-complete relied on the fact that any sentence prov-
able in Q is “true” of the natural numbers. The next definition and theorem
strengthen this theorem, by pinpointing just those aspects of “truth” that were
needed in the proof above. Don’t dwell on this theorem too long, though, be-
cause we will soon strengthen it even further. We include it mainly for histori-
cal purposes: Gödel’s original paper used the notion of ω-consistency, but his
result was strengthened by replacing ω-consistency with ordinary consistency
soon after.

Definition 33.2. A theory T is ω-consistent if the following holds: if ∃ x ϕ( x )


is any sentence and T proves ¬ ϕ(0), ¬ ϕ(1), ¬ ϕ(2), . . . then T does not prove
∃ x ϕ ( x ).
Theorem 33.3. Let T be any ω-consistent theory that includes Q. Then T is not
decidable.

Proof. If T includes Q, then T represents the computable functions and rela-


tions. We need only modify the previous proof. As above, if x ∈ K, then
T proves ∃s ϕ T ( x, x, s). Conversely, suppose T proves ∃s ϕ T ( x, x, s). Then x


must be in K: otherwise, there is no halting computation of machine x on input


x; since ϕ T represents Kleene’s T relation, T proves ¬ ϕ T ( x, x, 0), ¬ ϕ T ( x, x, 1),
. . . , making T ω-inconsistent.

33.4 Consistent Extensions of Q are Undecidable


Remember that a theory is consistent if it does not prove both ϕ and ¬ ϕ for
any formula ϕ. Since anything follows from a contradiction, an inconsistent
theory is trivial: every sentence is provable. Clearly, if a theory is ω-consistent, then it is consistent. But being consistent is a weaker requirement (i.e., there are theories that are consistent but not ω-consistent). We can weaken the
assumption in ?? to simple consistency to obtain a stronger theorem.
Lemma 33.4. There is no “universal computable relation.” That is, there is no binary
computable relation R( x, y), with the following property: whenever S(y) is a unary
computable relation, there is some k such that for every y, S(y) is true if and only if
R(k, y) is true.

Proof. Suppose R( x, y) is a universal computable relation. Let S(y) be the


relation ¬ R(y, y). Since S(y) is computable, for some k, S(y) is equivalent to
R(k, y). But then we have that S(k) is equivalent to both R(k, k ) and ¬ R(k, k),
which is a contradiction.
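The diagonal argument can be replayed directly in code: for any total computable relation R and any candidate index k, the diagonal relation S must disagree with the row R(k, ·) at the point y = k. A small illustration (the particular R is arbitrary):

    def diagonal(R):
        """The relation S(y) := not R(y, y) from the proof."""
        return lambda y: not R(y, y)

    def row_matches_diagonal_at(R, k):
        """If R were universal, some k would satisfy S(y) == R(k, y)
        for every y.  At y = k, S(k) = not R(k, k), so the two can
        never agree there; this always returns False."""
        S = diagonal(R)
        return S(k) == R(k, k)

    R = lambda x, y: (x + y) % 2 == 0
    print(any(row_matches_diagonal_at(R, k) for k in range(100)))  # False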

Theorem 33.5. Let T be any consistent theory that includes Q. Then T is not decid-
able.

Proof. Suppose T is a consistent, decidable extension of Q. We will obtain a


contradiction by using T to define a universal computable relation.
Let R( x, y) hold if and only if
x codes a formula θ (u), and T proves θ (y).
Since we are assuming that T is decidable, R is computable. Let us show that
R is universal. If S(y) is any computable relation, then it is representable in Q
(and hence T) by a formula θS (u). Then for every n, we have

S(n) → T ` θS (n)
→ R (# θ S ( u )# , n )
and

¬S(n) → T ` ¬θS (n)


→ T 6` θS (n) (since T is consistent)
→ ¬ R (# θ S ( u )# , n ).
That is, for every y, S(y) is true if and only if R(# θS (u)# , y) is. So R is universal,
and we have the contradiction we were looking for.


Let “true arithmetic” be the theory { ϕ : N  ϕ}, that is, the set of sentences
in the language of arithmetic that are true in the standard interpretation.

Corollary 33.6. True arithmetic is not decidable.

33.5 Axiomatizable Theories


A theory T is said to be axiomatizable if it has a computable set of axioms A.
(Saying that A is a set of axioms for T means T = { ϕ : A ` ϕ}.) Any “rea-
sonable” axiomatization of the natural numbers will have this property. In
particular, any theory with a finite set of axioms is axiomatizable.

Lemma 33.7. Suppose T is axiomatizable. Then T is computably enumerable.

Proof. Suppose A is a computable set of axioms for T. To determine if ϕ ∈ T,


just search for a proof of ϕ from the axioms.
Put slightly differently, ϕ is in T if and only if there is a finite list of axioms
ψ1 , . . . , ψk in A and a proof of (ψ1 ∧ · · · ∧ ψk ) → ϕ in first-order logic. But we
already know that any set with a definition of the form “there exists . . . such
that . . . ” is c.e., provided the second “. . . ” is computable.

33.6 Axiomatizable Complete Theories are Decidable


A theory is said to be complete if for every sentence ϕ, either ϕ or ¬ ϕ is prov-
able.

Lemma 33.8. Suppose a theory T is complete and axiomatizable. Then T is decidable.

Proof. Suppose T is complete and A is a computable set of axioms. If T is


inconsistent, it is clearly computable. (Algorithm: “just say yes.”) So we can
assume that T is also consistent.
To decide whether or not a sentence ϕ is in T, simultaneously search for a
proof of ϕ from A and a proof of ¬ ϕ. Since T is complete, you are bound to
find one or another; and since T is consistent, if you find a proof of ¬ ϕ, there
is no proof of ϕ.
Put in different terms, we already know that T is c.e.; so by a theorem we
proved before, it suffices to show that the complement of T is c.e. also. But a
formula ϕ is in T̄ if and only if ¬ ϕ is in T; so T̄ ≤m T.
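In program form: since T is c.e., we can enumerate it and simply wait for ϕ or ¬ ϕ to show up. Completeness guarantees that one of them eventually does, and consistency guarantees that the answer is unambiguous. A sketch, reusing the kind of enumerator from the previous sketch (enumerate_T and negate are assumed helpers):

    def decide_complete_theory(phi, enumerate_T, negate):
        """Decide membership in a complete, consistent, axiomatizable T
        by scanning an enumeration of T for phi or its negation."""
        neg_phi = negate(phi)
        for psi in enumerate_T():   # completeness: the loop terminates
            if psi == phi:
                return True
            if psi == neg_phi:
                return False        # consistency: phi is not in T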

33.7 Q has no Complete, Consistent, Axiomatizable


Extensions
Theorem 33.9. There is no complete, consistent, axiomatizable extension of Q.


Proof. We already know that there is no consistent, decidable extension of Q.


But if T is complete and axiomatized, then it is decidable.

This theorem is not far from Gödel's original 1931 formulation of the First Incompleteness Theorem. Aside from the more modern terminology, the key differences are these: Gödel has "ω-consistent" instead of "consistent"; and
he could not say “axiomatizable” in full generality, since the formal notion of
computability was not in place yet. (The formal models of computability were
developed over the following decade, including by Gödel, and in large part to
be able to characterize the kinds of theories that are susceptible to the Gödel
phenomenon.)
The theorem says you can’t have it all, namely, completeness, consistency,
and axiomatizability. If you give up any one of these, though, you can have
the other two: Q is consistent and computably axiomatized, but not com-
plete; the inconsistent theory is complete, and computably axiomatized (say,
by {0 6= 0}), but not consistent; and the set of true sentences of arithmetic is
complete and consistent, but it is not computably axiomatized.

33.8 Sentences Provable and Refutable in Q are Computably


Inseparable
Let Q̄ be the set of sentences whose negations are provable in Q, i.e., Q̄ = { ϕ :
Q ` ¬ ϕ}. Remember that disjoint sets A and B are said to be computably inseparable if there is no computable set C such that A ⊆ C and B ∩ C = ∅.
Lemma 33.10. Q and Q̄ are computably inseparable.

Proof. Suppose C is a computable set such that Q ⊆ C and Q̄ ∩ C = ∅. Let R( x, y)


be the relation
x codes a formula θ (u) and θ (y) is in C.
We will show that R( x, y) is a universal computable relation, yielding a con-
tradiction.
Suppose S(y) is computable, represented by θS (u) in Q. Then

S(n) → Q ` θS (n)
→ θS (n) ∈ C

and

¬S(n) → Q ` ¬θS (n)


→ θS (n) ∈ Q̄
→ θS (n) 6∈ C

So S(y) is equivalent to R(# θS (u)# , y).


33.9 Theories Consistent with Q are Undecidable


The following theorem says that not only is Q undecidable, but, in fact, any
theory that does not disagree with Q is undecidable.

Theorem 33.11. Let T be any theory in the language of arithmetic that is consistent
with Q (i.e., T ∪ Q is consistent). Then T is undecidable.

Proof. Remember that Q has a finite set of axioms, Q1 , . . . , Q8 . We can even


replace these by a single axiom, α = Q1 ∧ · · · ∧ Q8 .
Suppose T is a decidable theory consistent with Q. Let

C = { ϕ : T ` α → ϕ }.

We show that C would be a computable separation of Q and Q̄, a contra-


diction. First, if ϕ is in Q, then ϕ is provable from the axioms of Q; by the
deduction theorem, there is a proof of α → ϕ in first-order logic. So ϕ is in C.
On the other hand, if ϕ is in Q̄, then there is a proof of α → ¬ ϕ in first-
order logic. If T also proves α → ϕ, then T proves ¬α, in which case T ∪ Q
is inconsistent. But we are assuming T ∪ Q is consistent, so T does not prove
α → ϕ, and so ϕ is not in C.
We've shown that if ϕ is in Q, then it is in C, and if ϕ is in Q̄, then it is not in C.
So C is a computable separation, which is the contradiction we were looking
for.

This theorem is very powerful. For example, it implies:

Corollary 33.12. First-order logic for the language of arithmetic (that is, the set
{ ϕ : ϕ is provable in first-order logic}) is undecidable.

Proof. First-order logic is the set of consequences of ∅, which is consistent


with Q.

33.10 Theories in which Q is Intepretable are Undecidable


We can strengthen these results even more. Informally, an interpretation of a
language L1 in another language L2 involves defining the universe, relation
symbols, and function symbols of L1 with formulas in L2 . Though we won’t
take the time to do this, one can make this definition precise.

Theorem 33.13. Suppose T is a theory in a language in which one can interpret the
language of arithmetic, in such a way that T is consistent with the interpretation of
Q. Then T is undecidable. If T proves the interpretation of the axioms of Q, then no
consistent extension of T is decidable.


The proof is just a small modification of the proof of the last theorem: a decision procedure for such a T would again yield a computable separation of Q and Q̄. One can take ZFC,
Zermelo-Fraenkel set theory with the axiom of choice, to be an axiomatic foun-
dation that is powerful enough to carry out a good deal of ordinary mathemat-
ics. In ZFC one can define the natural numbers, and via this interpretation,
the axioms of Q are true. So we have

Corollary 33.14. There is no decidable extension of ZFC.

Corollary 33.15. There is no complete, consistent, computably axiomatizable exten-


sion of ZFC.

The language of ZFC has only a single binary relation, ∈. (In fact, you
don’t even need equality.) So we have

Corollary 33.16. First-order logic for any language with a binary relation symbol is
undecidable.

This result extends to any language with two unary function symbols,
since one can use these to simulate a binary relation symbol. The results just
cited are tight: it turns out that first-order logic for a language with only unary
relation symbols and at most one unary function symbol is decidable.
One more bit of trivia. We know that the set of sentences in the language 0, ′, +, ×, < true in the standard model is undecidable. In fact, one can define < in terms of the other symbols, and then one can define + in terms of × and ′. So the set of true sentences in the language 0, ′, × is undecidable. On the other hand, Presburger has shown that the set of sentences in the language 0, ′, + true in the standard model is decidable. The procedure is computationally infeasible, however.



Chapter 34

Incompleteness and Provability

34.1 Introduction
Hilbert thought that a system of axioms for a mathematical structure, such as
the natural numbers, is inadequate unless it allows one to derive all true state-
ments about the structure. Combined with his later interest in formal systems
of deduction, this suggests that he thought that we should guarantee that, say,
the formal system we are using to reason about the natural numbers is not only consistent, but also complete, i.e., every statement in its language is either
provable or its negation is. Gödel’s first incompleteness theorem shows that
no such system of axioms exists: there is no complete, consistent, axiomatiz-
able formal system for arithmetic. In fact, no “sufficiently strong,” consistent,
axiomatizable mathematical theory is complete.
A more important goal of Hilbert’s, the centerpiece of his program for the
justification of modern (“classical”) mathematics, was to find finitary consis-
tency proofs for formal systems representing classical reasoning. With regard
to Hilbert’s program, then, Gödel’s second incompleteness theorem was a
much bigger blow. The second incompleteness theorem can be stated in vague
terms, like the first incompleteness theorem. Roughly speaking, it says that no
sufficiently strong theory of arithmetic can prove its own consistency. We will
have to take “sufficiently strong” to include a little bit more than Q.
The idea behind Gödel’s original proof of the incompleteness theorem can
be found in the Epimenides paradox. Epimenides, a Cretan, asserted that all
Cretans are liars; a more direct form of the paradox is the assertion “this sen-
tence is false.” Essentially, by replacing truth with provability, Gödel was able
to formalize a sentence which, in a roundabout way, asserts that it itself is not
provable. If that sentence were provable, the theory would then be inconsis-
tent. Assuming ω-consistency—a property stronger than consistency—Gödel
was able to show that this sentence is also not refutable from the system of
axioms he was considering.
The first challenge is to understand how one can construct a sentence that


refers to itself. For every formula ϕ in the language of Q, let pϕq denote the
numeral corresponding to # ϕ# . Think about what this means: ϕ is a formula in
the language of Q, # ϕ# is a natural number, and pϕq is a term in the language
of Q. So every formula ϕ in the language of Q has a name, pϕq, which is a
term in the language of Q; this provides us with a conceptual framework in
which formulas in the language of Q can “say” things about other formulas.
The following lemma is known as the fixed-point lemma.
Lemma 34.1. Let T be any theory extending Q, and let ψ( x ) be any formula with
only the variable x free. Then there is a sentence ϕ such that T proves ϕ ↔ ψ(pϕq).
The lemma asserts that given any property ψ( x ), there is a sentence ϕ that
asserts “ψ( x ) is true of me.”
How can we construct such a sentence? Consider the following version of
the Epimenides paradox, due to Quine:
“Yields falsehood when preceded by its quotation” yields false-
hood when preceded by its quotation.
This sentence is not directly self-referential. It simply makes an assertion
about the syntactic objects between quotes, and, in doing so, it is on par with
sentences like
1. “Robert” is a nice name.

2. “I ran.” is a short sentence.

3. “Has three words” has three words.


But what happens when one takes the phrase “yields falsehood when pre-
ceded by its quotation,” and precedes it with a quoted version of itself? Then
one has the original sentence! In short, the sentence asserts that it is false.

34.2 The Fixed-Point Lemma


The fixed-point lemma says that for any formula ψ( x ), there is a sentence ϕ
such that T ` ϕ ↔ ψ(pϕq), provided T extends Q. In the case of the liar sen-
tence, we’d want ϕ to be equivalent (provably in T) to “pϕq is false,” i.e., the
statement that # ϕ# is the Gödel number of a false sentence. To understand the
idea of the proof, it will be useful to compare it with Quine’s informal gloss
of ϕ as, “‘yields a falsehood when preceded by its own quotation’ yields a
falsehood when preceded by its own quotation.” The operation of taking an
expression, and then forming a sentence by preceding this expression by its
own quotation may be called diagonalizing the expression, and the result its
diagonalization. So, the diagonalization of ‘yields a falsehood when preceded
by its own quotation’ is “‘yields a falsehood when preceded by its own quo-
tation’ yields a falsehood when preceded by its own quotation.” Now note


that Quine’s liar sentence is not the diagonalization of ‘yields a falsehood’ but
of ‘yields a falsehood when preceded by its own quotation.’ So the property
being diagonalized to yield the liar sentence itself involves diagonalization!
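Diagonalizing an English expression is easy to simulate with strings. A toy Python illustration of the operation just described, not part of the formal development:

    def diagonalize(expr: str) -> str:
        """Precede an expression by its own quotation."""
        return "'" + expr + "' " + expr

    phrase = "yields falsehood when preceded by its quotation"
    print(diagonalize(phrase))
    # 'yields falsehood when preceded by its quotation' yields
    # falsehood when preceded by its quotation

Diagonalizing the phrase yields exactly Quine's liar sentence.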
In the language of arithmetic, we form quotations of a formula with one
free variable by computing its Gödel numbers and then substituting the stan-
dard numeral for that Gödel number into the free variable. The diagonal-
ization of α( x ) is α(n), where n = # α( x )# . (From now on, let’s abbreviate
# α ( x )# as pα ( x )q.) So if ψ ( x ) is “is a falsehood,” then “yields a falsehood if

preceded by its own quotation,” would be “yields a falsehood when applied


to the Gödel number of its diagonalization." If we had a symbol diag for the function diag(n) which computes the Gödel number of the diagonalization of the formula with Gödel number n, we could write α( x ) as ψ(diag( x )). And Quine's version of the liar sentence would then be the diagonalization of it, i.e., α(pαq) or ψ(diag(pψ(diag( x ))q)). Of course, ψ( x ) could now be any
other property, and the same construction would work. For the incomplete-
ness theorem, we’ll take ψ( x ) to be “x is unprovable in T.” Then α( x ) would
be “yields a sentence unprovable in T when applied to the Gödel number of
its diagonalization.”
To formalize this in T, we have to find a way to formalize diag. The func-
tion diag(n) is computable, in fact, it is primitive recursive: if n is the Gödel
number of a formula α( x ), diag(n) returns the Gödel number of α(pα( x )q).
(Recall, pα( x )q is the standard numeral of the Gödel number of α( x ), i.e.,
# α ( x )# ). If diag were a function symbol in T representing the function diag,
we could take ϕ to be the formula ψ(diag(pψ(diag( x ))q)). Notice that

    diag(# ψ(diag( x ))# ) = # ψ(diag(pψ(diag( x ))q))# = # ϕ# .

Assuming T can prove

diag(pψ(diag( x ))q) = pϕq,

it can prove ψ(diag(pψ(diag( x ))q)) ↔ ψ(pϕq). But the left hand side is, by
definition, ϕ.
Of course, diag will in general not be a function symbol of T, and certainly is not one of Q. But, since diag is computable, it is representable in Q by some formula θdiag ( x, y). So instead of writing ψ(diag( x )) we can write ∃y (θdiag ( x, y) ∧ ψ(y)). Otherwise, the proof sketched above goes through,
and in fact, it goes through already in Q.

Lemma 34.2. Let ψ( x ) be any formula with one free variable x. Then there is a
sentence ϕ such that Q ` ϕ ↔ ψ(pϕq).

Proof. Given ψ( x ), let α( x ) be the formula ∃y (θdiag ( x, y) ∧ ψ(y)) and let ϕ be


its diagonalization, i.e., the formula α(pα( x )q).


Since θdiag represents diag, and diag(# α( x )# ) = # ϕ# , Q can prove

θdiag (pα( x )q, pϕq) (34.1)


∀y (θdiag (pα( x )q, y) → y = pϕq). (34.2)
Now we show that Q ` ϕ ↔ ψ(pϕq). We argue informally, using just logic
and facts provable in Q.
First, suppose ϕ, i.e., α(pα( x )q). Going back to the definition of α( x ), we
see that α(pα( x )q) just is
∃y (θdiag (pα( x )q, y) ∧ ψ(y)).
Consider such a y. Since θdiag (pα( x )q, y), by ??, y = pϕq. So, from ψ(y) we
have ψ(pϕq).
Now suppose ψ(pϕq). By ??, we have θdiag (pα( x )q, pϕq) ∧ ψ(pϕq). It fol-
lows that ∃y (θdiag (pα( x )q, y) ∧ ψ(y)). But that’s just α(pαq), i.e., ϕ.

You should compare this to the proof of the fixed-point lemma in com-
putability theory. The difference is that here we want to define a statement in
terms of itself, whereas there we wanted to define a function in terms of itself;
this difference aside, it is really the same idea.

34.3 The First Incompleteness Theorem


We can now describe Gödel’s original proof of the first incompleteness theo-
rem. Let T be any computably axiomatized theory in a language extending
the language of arithmetic, such that T includes the axioms of Q. This means
that, in particular, T represents computable functions and relations.
We have argued that, given a reasonable coding of formulas and proofs
as numbers, the relation PrfT ( x, y) is computable, where PrfT ( x, y) holds if
and only if x is the Gödel number of a derivation of the formula with Gödel
number y in T. In fact, for the particular theory that Gödel had in mind, Gödel
was able to show that this relation is primitive recursive, using the list of 45
functions and relations in his paper. The 45th relation, xBy, is just PrfT ( x, y)
for his particular choice of T. Remember that where Gödel uses the word
“recursive” in his paper, we would now use the phrase “primitive recursive.”
Since PrfT ( x, y) is computable, it is representable in T. We will use Prf T ( x, y)
to refer to the formula that represents it. Let Prov T (y) be the formula ∃ x Prf T ( x, y).
This describes the 46th relation, Bew(y), on Gödel’s list. As Gödel notes, this
is the only relation that “cannot be asserted to be recursive.” What he proba-
bly meant is this: from the definition, it is not clear that it is computable; and
later developments, in fact, show that it isn’t.
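The situation can be put in program form: decidability of PrfT gives only a search procedure for ProvT , one that halts exactly on the provable sentences. A minimal sketch, with is_proof_of a hypothetical decision procedure for PrfT :

    def semi_decide_prov(phi, is_proof_of):
        """Search for a T-derivation of phi.  Returns True if one is
        found; runs forever if phi is unprovable.  So Prov_T is c.e.,
        but this yields no decision procedure (and none exists)."""
        p = 0
        while True:
            if is_proof_of(p, phi):
                return True
            p += 1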
Definition 34.3. A theory T is ω-consistent if the following holds: if ∃ x ϕ( x )
is any sentence and T proves ¬ ϕ(0), ¬ ϕ(1), ¬ ϕ(2), . . . then T does not prove
∃ x ϕ ( x ).


We can now prove the following.

Theorem 34.4. Let T be any ω-consistent, axiomatizable theory extending Q. Then


T is not complete.

Proof. Let T be an axiomatizable theory containing Q. Then PrfT ( x, y) is de-


cidable, hence representable in Q by a formula Prf T ( x, y). Let Prov T (y) be the
formula we described above. By the fixed-point lemma, there is a formula γT
such that Q (and hence T) proves

γT ↔ ¬Prov T (pγT q). (34.3)

Note that γT says, in essence, "γT is not provable."


We claim that

1. If T is consistent, T doesn’t prove γT

2. If T is ω-consistent, T doesn’t prove ¬γT .

This means that if T is ω-consistent, it is incomplete, since it proves neither γT


nor ¬γT . Let us take each claim in turn.
Suppose T proves γT . Then there is a derivation, and so, for some number
m, the relation PrfT (m, # γT # ) holds. But then Q proves the sentence Prf T (m, pγT q).
So Q proves ∃ x Prf T ( x, pγT q), which is, by definition, Prov T (pγT q). By ??, Q
proves ¬γT , and since T extends Q, so does T. We have shown that if T proves
γT , then it also proves ¬γT , and hence it would be inconsistent.
For the second claim, let us show that if T proves ¬γT , then it is ω-inconsistent.
Suppose T proves ¬γT . If T is inconsistent, it is ω-inconsistent, and we are
done. Otherwise, T is consistent, so it does not prove γT . Since there is no
proof of γT in T, Q proves

¬Prf T (0, pγT q), ¬Prf T (1, pγT q), ¬Prf T (2, pγT q), . . .

and so does T. On the other hand, by ??, ¬γT is equivalent to ∃ x Prf T ( x, pγT q).
So T is ω-inconsistent.

34.4 Rosser’s Theorem


Can we modify Gödel’s proof to get a stronger result, replacing “ω-consistent”
with simply “consistent”? The answer is “yes,” using a trick discovered by
Rosser. Rosser’s trick is to use a “modified” provability predicate RProv T (y)
instead of Prov T (y).

Theorem 34.5. Let T be any consistent, axiomatizable theory extending Q. Then T


is not complete.


Proof. Recall that Prov T (y) is defined as ∃ x Prf T ( x, y), where Prf T ( x, y) repre-
sents the decidable relation which holds iff x is the Gödel number of a deriva-
tion of the sentence with Gödel number y. The relation that holds between x
and y if x is the Gödel number of a refutation of the sentence with Gödel num-
ber y is also decidable. Let not( x ) be the primitive recursive function which
does the following: if x is the code of a formula ϕ, not( x ) is a code of ¬ ϕ.
Then RefT ( x, y) holds iff PrfT ( x, not(y)). Let Ref T ( x, y) represent it. Then, if
T ` ¬ ϕ and δ is a corresponding derivation, Q ` Ref T (pδq, pϕq). We define
RProv T (y) as

∃ x (Prf T ( x, y) ∧ ∀z (z < x → ¬Ref T (z, y))).

Roughly, RProv T (y) says “there is a proof of y in T, and there is no shorter


refutation of y.” (You might find it convenient to read RProv T (y) as “y is
shmovable.”) Assuming T is consistent, RProv T (y) is true of the same num-
bers as Prov T (y); but from the point of view of provability in T (and we now
know that there is a difference between truth and provability!) the two have
different properties. (If T is inconsistent, then the two do not hold of the same
numbers!)
By the fixed-point lemma, there is a formula ρT such that

Q ` ρT ↔ ¬RProv T (pρT q). (34.4)

In contrast to the proof of ??, here we claim that if T is consistent, T doesn’t


prove ρT , and T also doesn’t prove ¬ρT . (In other words, we don’t need the
assumption of ω-consistency.)
First, let’s show that T 0 ρ T . Suppose it did, so there is a derivation of ρ T
from T; let n be its Gödel number. Then Q ` Prf T (n, pρ T q), since Prf T rep-
resents PrfT in Q. Also, for each k < n, k is not the Gödel number of a derivation of ¬ρ T ,
since T is consistent. So for each k < n, Q ` ¬Ref T (k, pρ T q). By ??(2),
Q ` ∀z (z < n → ¬Ref T (z, pρ T q)). Thus,

Q ` ∃ x (Prf T ( x, pρ T q) ∧ ∀z (z < x → ¬Ref T (z, pρ T q))),

but that’s just RProv T (pρ T q). By ??, Q ` ¬ρ T . Since T extends Q, also T `
¬ρ T . We’ve assumed that T ` ρ T , so T would be inconsistent, contrary to the
assumption of the theorem.
Now, let’s show that T 0 ¬ρ T . Again, suppose it did, and suppose n
is the Gödel number of a derivation of ¬ρ T . Then RefT (n, # ρ T # ) holds, and
since Ref T represents RefT in Q, Q ` Ref T (n, pρ T q). We’ll again show that
T would then be inconsistent because it would also prove ρ T . Since Q `
ρ T ↔ ¬RProv T (pρ T q), and since T extends Q, it suffices to show that Q `
¬RProv T (pρ T q). The sentence ¬RProv T (pρ T q), i.e.,

¬∃ x (Prf T ( x, pρ T q) ∧ ∀z (z < x → ¬Ref T (z, pρ T q)))


is logically equivalent to

∀ x (Prf T ( x, pρ T q) → ∃z (z < x ∧ Ref T (z, pρ T q)))

We argue informally using logic, making use of facts about what Q proves.
Suppose x is arbitrary and Prf T ( x, pρ T q). We already know that T 0 ρ T , and
so for every k, Q ` ¬Prf T (k, pρ T q). Thus, for every k it follows that x 6= k. In
particular, we have (a) that x 6= n. We also have ¬( x = 0 ∨ x = 1 ∨ · · · ∨ x =
n − 1) and so by ??(2), (b) ¬( x < n). By ??, n < x. Since Q ` Ref T (n, pρ T q), we
have n < x ∧ Ref T (n, pρ T q), and from that ∃z (z < x ∧ Ref T (z, pρ T q)). Since x
was arbitrary we get

∀ x (Prf T ( x, pρ T q) → ∃z (z < x ∧ Ref T (z, pρ T q)))

as required.

34.5 Comparison with Gödel’s Original Paper


It is worthwhile to spend some time with Gödel’s 1931 paper. The introduc-
tion sketches the ideas we have just discussed. Even if you just skim through
the paper, it is easy to see what is going on at each stage: first Gödel describes
the formal system P (syntax, axioms, proof rules); then he defines the prim-
itive recursive functions and relations; then he shows that xBy is primitive
recursive, and argues that the primitive recursive functions and relations are
represented in P. He then goes on to prove the incompleteness theorem, as
above. In section 3, he shows that one can take the unprovable assertion to
be a sentence in the language of arithmetic. This is the origin of the β-lemma,
which is what we also used to handle sequences in showing that the recursive
functions are representable in Q. Gödel doesn't go so far as to isolate a minimal
set of axioms that suffice, but we now know that Q will do the trick. Finally,
in Section 4, he sketches a proof of the second incompleteness theorem.

34.6 The Provability Conditions for PA


Peano arithmetic, or PA, is the theory extending Q with induction axioms for
all formulas. In other words, one adds to Q axioms of the form

( ϕ(0) ∧ ∀ x ( ϕ( x ) → ϕ( x 0 ))) → ∀ x ϕ( x )

for every formula ϕ. Notice that this is really a schema, which is to say, in-
finitely many axioms (and it turns out that PA is not finitely axiomatizable).
But since one can effectively determine whether or not a string of symbols is
an instance of an induction axiom, the set of axioms for PA is computable. PA
is a much more robust theory than Q. For example, one can easily prove that
addition and multiplication are commutative, using induction in the usual


way. In fact, most finitary number-theoretic and combinatorial arguments can


be carried out in PA.
Since PA is computably axiomatized, the proof relation PrfPA ( x, y) is computable and hence represented in Q (and so, in PA). As before, I will take Prf PA ( x, y) to denote the formula representing the relation. Let ProvPA (y)
be the formula ∃ x PrfPA ( x, y), which, intuitively says, “y is provable from the
axioms of PA.” The reason we need a little bit more than the axioms of Q is
we need to know that the theory we are using is strong enough to prove a
few basic facts about this provability predicate. In fact, what we need are the
following facts:

P1. If PA ` ϕ, then PA ` ProvPA (pϕq)

P2. For all formulas ϕ and ψ,

PA ` ProvPA (pϕ → ψq) → (ProvPA (pϕq) → ProvPA (pψq))

P3. For every formula ϕ,

PA ` ProvPA (pϕq) → ProvPA (pProvPA (pϕq)q).

The only way to verify that these three properties hold is to describe the for-
mula ProvPA (y) carefully and use the axioms of PA to describe the relevant
formal proofs. Conditions (1) and (2) are easy; it is really condition (3) that
requires work. (Think about what kind of work it entails. . . ) Carrying out the
details would be tedious and uninteresting, so here we will ask you to take it
on faith that PA has the three properties listed above. A reasonable choice of
ProvPA (y) will also satisfy

P4. If PA ` ProvPA (pϕq), then PA ` ϕ.

But we will not need this fact.


Incidentally, Gödel was lazy in the same way we are being now. At the
end of the 1931 paper, he sketches the proof of the second incompleteness
theorem, and promises the details in a later paper. He never got around to it; since everyone who understood the argument believed that it could be carried out, he did not need to fill in the details.

34.7 The Second Incompleteness Theorem


How can we express the assertion that PA doesn’t prove its own consistency?
Saying PA is inconsistent amounts to saying that PA proves 0 = 1. So we
can take ConPA to be the formula ¬ProvPA (p0 = 1q), and then the following
theorem does the job:

Theorem 34.6. Assuming PA is consistent, then PA does not prove ConPA .


It is important to note that the theorem depends on the particular repre-


sentation of ConPA (i.e., the particular representation of ProvPA (y)). All we
will use is that the representation of ProvPA (y) has the three properties above,
so the theorem generalizes to any theory with a provability predicate having
these properties.

It is informative to read Gödel’s sketch of an argument, since the theorem


follows like a good punch line. It goes like this. Let γPA be the Gödel sentence
that we constructed in the proof of ??. We have shown “If PA is consistent,
then PA does not prove γPA .” If we formalize this in PA, we have a proof of

ConPA → ¬ProvPA (pγPA q).

Now suppose PA proves ConPA . Then it proves ¬ProvPA (pγPA q). But since
γPA is a Gödel sentence, this is equivalent to γPA . So PA proves γPA .

But: we know that if PA is consistent, it doesn’t prove γPA ! So if PA is


consistent, it can’t prove ConPA .

To make the argument more precise, we will let γPA be the Gödel sentence
for PA and use the provability conditions (1)–(3) above to show that PA proves
ConPA → γPA . This will show that PA doesn’t prove ConPA . Here is a sketch


of the proof, in PA. (For simplicity, we drop the PA subscripts.)

γ ↔ ¬Prov(pγq) (34.5)
γ is a Gödel sentence
γ → ¬Prov(pγq) (34.6)
from ??
γ → (Prov(pγq) → ⊥) (34.7)
from ?? by logic
Prov(pγ → (Prov(pγq) → ⊥)q) (34.8)
from ?? by condition P1
Prov(pγq) → Prov(p(Prov(pγq) → ⊥)q) (34.9)
from ?? by condition P2
Prov(pγq) → (Prov(pProv(pγq)q) → Prov(p⊥q)) (34.10)
from ?? by condition P2 and logic
Prov(pγq) → Prov(pProv(pγq)q) (34.11)
by P3
Prov(pγq) → Prov(p⊥q) (34.12)
from ?? and ?? by logic
Con → ¬Prov(pγq) (34.13)
contraposition of ?? and Con ≡ ¬Prov(p⊥q)
Con → γ
from ?? and ?? by logic

The use of logic in the above is just elementary facts from propositional logic,
e.g., ?? uses ` ¬ ϕ ↔ ( ϕ → ⊥) and ?? uses ϕ → (ψ → χ), ϕ → ψ ` ϕ → χ. The
use of condition P2 in ?? and ?? relies on instances of P2, Prov(pϕ → ψq) →
(Prov(pϕq) → Prov(pψq)). In the first one, ϕ ≡ γ and ψ ≡ Prov(pγq) → ⊥; in
the second, ϕ ≡ Prov(pγq) and ψ ≡ ⊥.
The more abstract version of the incompleteness theorem is as follows:

Theorem 34.7. Let T be any axiomatized theory extending Q and let Prov T (y) be
any formula satisfying provability conditions P1–P3 for T. Then if T is consistent,
then T does not prove ConT .

The moral of the story is that no “reasonable” consistent theory for math-
ematics can prove its own consistency. Suppose T is a theory of mathematics
that includes Q and Hilbert’s “finitary” reasoning (whatever that may be).
Then, the whole of T cannot prove the consistency of T, and so, a fortiori, the
finitary fragment can’t prove the consistency of T either. In that sense, there
cannot be a finitary consistency proof for “all of mathematics.”


There is some leeway in interpreting the term “finitary,” and Gödel, in the
1931 paper, grants the possibility that something we may consider “finitary”
may lie outside the kinds of mathematics Hilbert wanted to formalize. But
Gödel was being charitable; today, it is hard to see how we might find some-
thing that can reasonably be called finitary but is not formalizable in, say,
ZFC.

34.8 Löb’s Theorem


The Gödel sentence for a theory T is a fixed point of ¬Prov T ( x ), i.e., a sen-
tence γ such that
T ` ¬Prov T (pγq) ↔ γ.
It is not provable, because if T ` γ, (a) by provability condition (1), T `
Prov T (pγq), and (b) T ` γ together with T ` ¬Prov T (pγq) ↔ γ gives T `
¬Prov T (pγq), and so T would be inconsistent. Now it is natural to ask about
the status of a fixed point of Prov T ( x ), i.e., a sentence δ such that

T ` Prov T (pδq) ↔ δ.

If it were provable, T ` Prov T (pδq) by condition (1), but the same conclusion
follows if we apply modus ponens to the equivalence above. Hence, we don’t
get that T is inconsistent, at least not by the same argument as in the case of
the Gödel sentence. This of course does not show that T does prove δ.
We can make headway on this question if we generalize it a bit. The left-to-
right direction of the fixed point equivalence, Prov T (pδq) → δ, is an instance of
a general schema called a reflection principle: Prov T (pϕq) → ϕ. It is called that
because it expresses, in a sense, that T can “reflect” about what it can prove;
basically it says, “If T can prove ϕ, then ϕ is true,” for any ϕ. This is true for
sound theories only, of course, and this suggests that theories will in general
not prove every instance of it. So which instances can a theory (strong enough,
and satisfying the provability conditions) prove? Certainly all those where ϕ
itself is provable. And that’s it, as the next result shows.

Theorem 34.8. Let T be an axiomatizable theory extending Q, and suppose Prov T (y)
is a formula satisfying conditions P1–P3 from ??. If T proves Prov T (pϕq) → ϕ, then
in fact T proves ϕ.

Put differently, if T 0 ϕ, then T 0 Prov T (pϕq) → ϕ. This result is known as


Löb’s theorem.
The heuristic for the proof of Löb’s theorem is a clever proof that Santa
Claus exists. (If you don’t like that conclusion, you are free to substitute any
other conclusion you would like.) Here it is:

1. Let X be the sentence, “If X is true, then Santa Claus exists.”


2. Suppose X is true.

3. Then what it says holds; i.e., we have: if X is true, then Santa Claus
exists.

4. Since we are assuming X is true, we can conclude that Santa Claus exists,
by modus ponens from (2) and (3).

5. We have succeeded in deriving (4), “Santa Claus exists,” from the as-
sumption (2), “X is true.” By conditional proof, we have shown: “If X is
true, then Santa Claus exists.”

6. But this is just the sentence X. So we have shown that X is true.

7. But then, by the argument (2)–(4) above, Santa Claus exists.

A formalization of this idea, replacing “is true” with “is provable,” and “Santa
Claus exists” with ϕ, yields the proof of Löb’s theorem. The trick is to apply
the fixed-point lemma to the formula Prov T (y) → ϕ. The fixed point of that
corresponds to the sentence X in the preceding sketch.

Proof. Suppose ϕ is a sentence such that T proves Prov T (pϕq) → ϕ. Let ψ(y) be
the formula Prov T (y) → ϕ, and use the fixed-point lemma to find a sentence θ


such that T proves θ ↔ ψ(pθq). Then each of the following is provable in T:

θ ↔ (Prov T (pθq) → ϕ) (34.14)


θ is a fixed point of ψ(y)
θ → (Prov T (pθq) → ϕ) (34.15)
from ??
Prov T (pθ → (Prov T (pθq) → ϕ)q) (34.16)
from ?? by condition P1
Prov T (pθq) → Prov T (pProv T (pθq) → ϕq) (34.17)
from ?? using condition P2
Prov T (pθq) → (Prov T (pProv T (pθq)q) → Prov T (pϕq)) (34.18)
from ?? using P2 again
Prov T (pθq) → Prov T (pProv T (pθq)q) (34.19)
by provability condition P3
Prov T (pθq) → Prov T (pϕq) (34.20)
from ?? and ??
Prov T (pϕq) → ϕ (34.21)
by assumption of the theorem
Prov T (pθq) → ϕ (34.22)
from ?? and ??
(Prov T (pθq) → ϕ) → θ (34.23)
from ??
θ (34.24)
from ?? and ??
Prov T (pθq) (34.25)
from ?? by condition P1
ϕ from ?? and ??

With Löb's theorem in hand, there is a short proof of the second incompleteness theorem (for theories having a provability predicate satisfying conditions P1–P3): if T ` Prov T (p⊥q) → ⊥, then T ` ⊥. If T is consistent, T 0 ⊥. So, T 0 Prov T (p⊥q) → ⊥, i.e., T 0 ConT . We can also apply it to show that δ, the
fixed point of Prov T ( x ), is provable. For since

T ` Prov T (pδq) ↔ δ


in particular

T ` Prov T (pδq) → δ

and so by Löb’s theorem, T ` δ.

34.9 The Undefinability of Truth


The notion of definability depends on having a formal semantics for the lan-
guage of arithmetic. We have described a set of formulas and sentences in
the language of arithmetic. The “intended interpretation” is to read such sen-
tences as making assertions about the natural numbers, and such an assertion
can be true or false. Let N be the structure with domain N and the standard in-
terpretation for the symbols in the language of arithmetic. Then N  ϕ means
“ϕ is true in the standard interpretation.”

Definition 34.9. A relation R( x1 , . . . , xk ) of natural numbers is definable in N if


and only if there is a formula ϕ( x1 , . . . , xk ) in the language of arithmetic such
that for every n1 , . . . , nk , R(n1 , . . . , nk ) if and only if N  ϕ(n1 , . . . , nk ).

Put differently, a relation is definable in N if and only if it is repre-


sentable in the theory TA, where TA = { ϕ : N  ϕ} is the set of true sentences
of arithmetic. (If this is not immediately clear to you, you should go back and
check the definitions and convince yourself that this is the case.)

Lemma 34.10. Every computable relation is definable in N.

Proof. It is easy to check that the formula representing a relation in Q defines


the same relation in N.

Now one can ask, is the converse also true? That is, is every relation defin-
able in N computable? The answer is no. For example:

Lemma 34.11. The halting relation is definable in N.

Proof. Let H be the halting relation, i.e.,

H = {he, x i : ∃s T (e, x, s)}.

Let θ T define T in N. Then

H = {he, x i : N  ∃s θ T (e, x, s)},

so ∃s θ T (z, x, s) defines H in N.

What about TA itself? Is it definable in arithmetic? That is: is the set


{# ϕ# : N  ϕ} definable in arithmetic? Tarski’s theorem answers this in the
negative.


Theorem 34.12. The set of true statements of arithmetic is not definable in arith-
metic.

Proof. Suppose θ ( x ) defined it. By the fixed-point lemma, there is a formula


ϕ such that Q proves ϕ ↔ ¬θ (pϕq), and hence N  ϕ ↔ ¬θ (pϕq). But then
N  ϕ if and only if N  ¬θ (pϕq), which contradicts the fact that θ (y) is
supposed to define the set of true statements of arithmetic.

Tarski applied this analysis to a more general philosophical notion of truth.


Given any language L, Tarski argued that an adequate notion of truth for L
would have to satisfy, for each sentence X,

‘X’ is true if and only if X.

Tarski’s oft-quoted example, for English, is the sentence

‘Snow is white’ is true if and only if snow is white.

However, for any language strong enough to represent the diagonal function,
and any linguistic predicate T ( x ), we can construct a sentence X satisfying
“X if and only if not T (‘X’).” Given that we do not want a truth predicate
to declare some sentences to be both true and false, Tarski concluded that
one cannot specify a truth predicate for all sentences in a language without,
somehow, stepping outside the bounds of the language. In other words, the truth predicate for a language cannot be defined in the language itself.

Problems
Problem 34.1. Show that PA proves γPA → ConPA .

Problem 34.2. Let T be a computably axiomatized theory, and let Prov T be a


provability predicate for T. Consider the following four statements:

1. If T ` ϕ, then T ` Prov T (pϕq).

2. T ` ϕ → Prov T (pϕq).

3. If T ` Prov T (pϕq), then T ` ϕ.

4. T ` Prov T (pϕq) → ϕ

Under what conditions are each of these statements true?

Problem 34.3. Show that Q(n) ⇔ n ∈ {# ϕ# : Q ` ϕ} is definable in arithmetic.



Part VIII

Second-order Logic


This is the beginnings of a part on second-order logic.



Chapter 35

Syntax and Semantics

Basic syntax and semantics for SOL covered so far. As a chapter it’s
too short. Substitution for second-order variables has to be covered to
be able to talk about derivation systems for SOL, and there’s some subtle
issues there.

35.1 Introduction
In first-order logic, we combine the non-logical symbols of a given language,
i.e., its constant symbols, function symbols, and predicate symbols, with the
logical symbols to express things about first-order structures. This is done
using the notion of satisfaction, which relates a structure M, together with a
variable assignment s, and a formula ϕ: M, s  ϕ holds iff what ϕ expresses
when its constant symbols, function symbols, and predicate symbols are in-
terpreted as M says, and its free variables are interpreted as s says, is true.
The interpretation of the identity predicate = is built into the definition of
M, s  ϕ, as is the interpretation of ∀ and ∃. The former is always interpreted
as the identity relation on the domain |M| of the structure, and the quanti-
fiers are always interpreted as ranging over the entire domain. But, crucially,
quantification is only allowed over elements of the domain, and so only object
variables are allowed to follow a quantifier.
In second-order logic, both the language and the definition of satisfaction
are extended to include free and bound function and predicate variables, and
quantification over them. These variables are related to function symbols and
predicate symbols the same way that object variables are related to constant
symbols. They play the same role in the formation of terms and formulas
of second-order logic, and quantification over them is handled in a similar
way. In the standard semantics, the second-order quantifiers range over all
possible objects of the right type (n-place functions from |M| to |M| for func-


tion variables, n-place relations for predicate variables). For instance, while ∀v0 (P (v0 ) ∨ ¬P (v0 )) is a formula in both first- and second-order logic, in the latter we can also consider ∀V ∀v0 (V (v0 ) ∨ ¬V (v0 )) and ∃V ∀v0 (V (v0 ) ∨ ¬V (v0 )). Since these contain no free variables, they are sentences of second-order logic. Here, V is a second-order 1-place predicate variable. The allow-
able interpretations of V are the same that we can assign to a 1-place predicate
symbol like P , i.e., subsets of |M|. Quantification over them then amounts
to saying that ∀v (V (v0 ) ∨ ¬V (v0 )) holds for all ways of assigning a subset
of |M| as the value of V , or for at least one. Since every set either contains or
fails to contain a given object, both are true in any structure.

35.2 Terms and Formulas


Like in first-order logic, expressions of second-order logic are built up from
a basic vocabulary containing variables, constant symbols, predicate symbols and
sometimes function symbols. From them, together with logical connectives,
quantifiers, and punctuation symbols such as parentheses and commas, terms
and formulas are formed. The difference is that in addition to variables for
objects, second-order logic also contains variables for relations and functions,
and allows quantification over them. So the logical symbols of second-order
logic are those of first-order logic, plus:

1. A denumerable set of second-order relation variables of every arity n:


V0n , V1n , V2n , . . .

2. A denumerable set of second-order function variables of every arity n: u0n , u1n , u2n , . . .

Just as we use x, y, z as meta-variables for first-order variables vi , we’ll use


X, Y, Z, etc., as metavariables for Vin and u, v, etc., as meta-variables for uin .
The non-logical symbols of a second-order language are specified the same
way a first-order language is: by listing its constant symbols, function sym-
bols, and predicate symbols.
In first-order logic, the identity predicate = is usually included. There, the non-logical symbols of a language L are crucial to allow us to
express anything interesting. There are of course sentences that use no non-
logical symbols, but with only = it is hard to say anything interesting. In
second-order logic, since we have an unlimited supply of relation and func-
tion variables, we can say anything we can say in a first-order language even
without a special supply of non-logical symbols.

Definition 35.1 (Second-order Terms). The set of second-order terms of L, Trm2 (L),
is defined by adding to ?? the clause

1. If u is an n-place function variable and t1 , . . . , tn are terms, then u(t1 , . . . , tn )


is a term.


So, a second-order term looks just like a first-order term, except that where
a first-order term contains a function symbol fi n , a second-order term may
contain a function variable uin in its place.

Definition 35.2 (Second-order formula). The set of second-order formulas Frm2 (L)
of the language L is defined by adding to ?? the clauses

1. If X is an n-place predicate variable and t1 , . . . , tn are second-order terms


of L, then X (t1 , . . . , tn ) is an atomic formula.

2. If ϕ is a formula and u is a function variable, then ∀u ϕ is a formula.

3. If ϕ is a formula and X is a predicate variable, then ∀ X ϕ is a formula.

4. If ϕ is a formula and u is a function variable, then ∃u ϕ is a formula.

5. If ϕ is a formula and X is a predicate variable, then ∃ X ϕ is a formula.

35.3 Satisfaction
To define the satisfaction relation M, s  ϕ for second-order formulas, we have
to extend the definitions to cover second-order variables.

Definition 35.3 (Variable Assignment). A variable assignment s for a struc-


ture M is a function which maps each

1. object variable vi to an element of |M|, i.e., s(vi ) ∈ |M|

2. n-place relation variable Vi n to an n-place relation on |M|, i.e., s(Vi n ) ⊆


|M| n ;
3. n-place function variable uin to an n-place function from |M| to |M|, i.e.,
s(uin ) : |M|n → |M|;

A structure assigns a value to each constant symbol and function symbol,


and a variable assignment assigns objects, relations, and functions to the object, relation, and function variables. Together, they let us assign a value to every term.

Definition 35.4 (Value of a Term). If t is a term of the language L, M is a


structure for L, and s is a variable assignment for M, the value Val^M_s (t) is defined as for first-order terms, plus the following clause:

t ≡ u(t1 , . . . , tn ):

    Val^M_s (t) = s(u)(Val^M_s (t1 ), . . . , Val^M_s (tn )).

Definition 35.5 (Satisfaction). For second-order formulas ϕ, the definition of


satisfaction is like ?? with the addition of:


1. ϕ ≡ X (t1 , . . . , tn ): M, s  ϕ iff hVal^M_s (t1 ), . . . , Val^M_s (tn )i ∈ s( X ), where X is an n-place predicate variable.

2. ϕ ≡ ∀ X ψ: M, s  ϕ iff for every X-variant s0 of s, M, s0  ψ.

3. ϕ ≡ ∃ X ψ: M, s  ϕ iff there is an X-variant s0 of s so that M, s0  ψ.

4. ϕ ≡ ∀u ψ: M, s  ϕ iff for every u-variant s0 of s, M, s0  ψ.

5. ϕ ≡ ∃u ψ: M, s  ϕ iff there is an u-variant s0 of s so that M, s0  ψ.

Example 35.6. M, s  ∀z ( Xz ↔ ¬Yz) whenever s(Y ) = |M| \ s( X ). So for


instance, let |M| = {1, 2, 3}, s( X ) = {1, 2} and s(Y ) = {3}.
M, s  ∃Y (∃y Yy ∧ ∀z ( Xz ↔ ¬Yz)) iff there is an s0 ∼Y s such that M, s0 
(∃y Yy ∧ ∀z ( Xz ↔ ¬Yz)). And that is the case iff s0 (Y ) 6= ∅ (so that M, s0 
∃y Yy) and, as before, s0 (Y ) = |M| \ s0 ( X ). In other words, M, s  ∃Y (∃y Yy ∧
∀z ( Xz ↔ ¬Yz)) iff |M| \ s( X ) is non-empty, or, s( X ) 6= |M|. So, the formula
is satisfied, e.g., if s( X ) = {1, 2} but not if s( X ) = {1, 2, 3}.
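On a finite domain, second-order quantifiers can be checked by brute force over all subsets. The following Python sketch verifies the claims of this example; the encoding is ad hoc, written just for this one formula:

    from itertools import combinations

    def subsets(domain):
        """All subsets of a finite domain."""
        elems = list(domain)
        return (set(c) for r in range(len(elems) + 1)
                for c in combinations(elems, r))

    def satisfies(domain, X):
        """Check M, s |= exists Y (exists y Yy and all z (Xz <-> not Yz))
        by trying every possible value for Y."""
        return any(
            len(Y) > 0 and all((z in X) != (z in Y) for z in domain)
            for Y in subsets(domain)
        )

    M = {1, 2, 3}
    print(satisfies(M, {1, 2}))     # True: Y = {3} works
    print(satisfies(M, {1, 2, 3}))  # False: Y would have to be empty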

35.4 Semantic Notions


The central logical notions of validity, entailment, and satisfiability are defined
the same way for second-order logic as they are for first-order logic, except
that the underlying satisfaction relation is now that for second-order formu-
las. A second-order sentence, of course, is a formula in which all variables,
including predicate and function variables, are bound.

Definition 35.7 (Validity). A sentence ϕ is valid,  ϕ, iff M  ϕ for every


structure M.

Definition 35.8 (Entailment). A set of sentences Γ entails a sentence ϕ, Γ  ϕ,


iff for every structure M with M  Γ, M  ϕ.

Definition 35.9 (Satisfiability). A set of sentences Γ is satisfiable if M  Γ for


some structure M. If Γ is not satisfiable it is called unsatisfiable.

35.5 Expressive Power


Quantification over second-order variables is responsible for an immense in-
crease in the expressive power of the language over that of first-order logic.
Second-order existential quantification lets us say that functions or relations
with certain properties exist. In first-order logic, the only way to do that is
to specify a non-logical symbol (i.e., a function symbol or predicate symbol) for
this purpose. Second-order universal quantification lets us say that all subsets
of, relations on, or functions from the domain to the domain have a property.
In first-order logic, we can only say that the subsets, relations, or functions
assigned to one of the non-logical symbols of the language have a property.


And when we say that subsets, relations, functions exist that have a property,
or that all of them have it, we can use second-order quantification in speci-
fying this property as well. This lets us define relations not definable in first-
order logic, and express properties of the domain not expressible in first-order
logic.

Example 35.10. If M is a structure for a language L, a relation R ⊆ |M|2 is


definable in L if there is some formula ϕ R (v0 , v1 ) with only the variables v0 and v1 free, such that R( x, y) holds (i.e., h x, yi ∈ R) iff M, s  ϕ R (v0 , v1 ) for s(v0 ) = x and s(v1 ) = y. For instance, in first-order logic we can define the
identity relation Id|M| (i.e., {h x, x i : x ∈ |M|}) by the formula v0 = v1 . In
second-order logic, we can define this relation without =. For if x and y are the
same element of |M|, then they are elements of the same subsets of |M| (since
sets are determined by their elements). Conversely, if x and y are different,
then they are not elements of the same subsets: e.g., x ∈ { x } but y ∈ / { x } if
x 6= y. So “being elements of the same subsets of |M|” is a relation that holds
of x and y iff x = y. It is a relation that can be expressed in second-order logic,
since we can quantify over all subsets of |M|. Hence, the following formula
defines Id|M| :
∀ X ( X (v ) ↔ X (v ))
Example 35.11. If R is a two-place predicate symbol, RM is a two-place rela-
tion on |M|. Its transitive closure R∗ is the relation that holds between x and
y if for some z1 , . . . , zk , R( x, z1 ), R(z1 , z2 ), . . . , R(zk , y) holds. This includes
the case if k = 0, i.e., if R( x, y) holds. This means that R ⊆ R∗ . In fact, R∗
is the smallest relation that includes R and that is transitive. We can say in
second-order logic that X is a transitive relation that includes R:

ψR ( X ) ≡ ∀ x ∀y ( R( x, y) → X ( x, y)) ∧
∀ x ∀y ∀z (( X ( x, y) ∧ X (y, z)) → X ( x, z))
Here, somewhat confusingly, we use R as the predicate symbol for R. The first
conjunct says that R ⊆ X and the second that X is transitive.
To say that X is the smallest such relation is to say that it is itself included in
every relation that includes R and is transitive. So we can define the transitive
closure of R by the formula
R∗ ( X ) ≡ ψR ( X ) ∧ ∀Y (ψR (Y ) → ∀ x ∀y ( X ( x, y) → Y ( x, y)))
M, s  R∗ ( X ) iff s( X ) = R∗ . The transitive closure of R cannot be expressed
in first-order logic.
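Although the transitive closure is not definable in first-order logic, it is easily computed on finite relations by iterating composition until a fixed point is reached. A short sketch, independent of the formal apparatus of this chapter:

    def transitive_closure(R):
        """The smallest transitive relation including R (a set of
        pairs): keep adding composed pairs until nothing new appears."""
        closure = set(R)
        while True:
            new_pairs = {(x, w) for (x, y) in closure
                                for (z, w) in closure if y == z}
            if new_pairs <= closure:
                return closure
            closure |= new_pairs

    print(sorted(transitive_closure({(1, 2), (2, 3)})))
    # [(1, 2), (1, 3), (2, 3)]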

35.6 Describing Infinite and Enumerable Domains


A set M is (Dedekind) infinite iff there is an injective function f : M → M
which is not surjective, i.e., with ran( f ) 6= M. In first-order logic, we can


consider a one-place function symbol f and say that the function f M assigned
to it in a structure M is injective and ran( f ) 6= |M|:

∀ x ∀y ( f ( x ) = f (y) → x = y) ∧ ∃y ∀ x y 6= f ( x )

If M satisfies this sentence, f M : |M| → |M| is injective, and so |M| must


be infinite. If |M| is infinite, and hence such a function exists, we can let f M
be that function and M will satisfy the sentence. However, this requires that
our language contains the non-logical symbol f we use for this purpose. In
second-order logic, we can simply say that such a function exists. This no
longer requires f, and we have the sentence in pure second-order logic

Inf ≡ ∃u (∀x∀y (u(x) = u(y) → x = y) ∧ ∃y∀x y ≠ u(x))

M ⊨ Inf iff |M| is infinite. We can then define Fin ≡ ¬Inf; M ⊨ Fin iff |M| is
finite. No single sentence of pure first-order logic can express that the domain
is infinite, although an infinite set of them can. There is no set of sentences of
pure first-order logic that is satisfied in a structure iff its domain is finite.

Proposition 35.12. M ⊨ Inf iff |M| is infinite.

Proof. M ⊨ Inf iff M, s ⊨ ∀x∀y (u(x) = u(y) → x = y) ∧ ∃y∀x y ≠ u(x) for
some s. If it does, s(u) is an injective function, and some y ∈ |M| is not in
the range of s(u). Conversely, if there is an injective f : |M| → |M| with
ran(f) ≠ |M|, then s(u) = f is such a variable assignment.
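On a finite domain, the pigeonhole principle rules out any injective but non-surjective self-map, which is why Inf is false in every finite structure. A brute-force Python check of this (illustrative only; the names are ours):

```python
from itertools import product

def injective_non_surjective(domain):
    """Search for an injective but non-surjective f : domain -> domain."""
    elems = sorted(domain)
    for values in product(elems, repeat=len(elems)):
        injective = len(set(values)) == len(elems)
        surjective = set(values) == set(elems)
        if injective and not surjective:
            return dict(zip(elems, values))
    return None

# By pigeonhole, the search fails on every finite domain.
for n in range(1, 6):
    assert injective_non_surjective(range(n)) is None
print("No injective, non-surjective self-map on a finite domain.")
```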

A set M is enumerable if there is an enumeration

m0 , m1 , m2 , . . .

of its elements (without repetitions). Such an enumeration exists iff there is


an element z ∈ M and a function f : M → M such that z, f(z), f(f(z)), . . . , are all
the elements of M. For if the enumeration exists, z = m0 and f (mk ) = mk+1
(or f (mk ) = mk if mk is the last element of the enumeration) are the requisite
element and function. On the other hand, if such a z and f exist, then z, f (z),
f ( f (z)), . . . , is an enumeration of M, and M is enumerable. We can express
the existence of z and f in second-order logic to produce a sentence true in
a structure iff the structure is enumerable:

Count ≡ ∃z ∃u ∀ X (( X (z) ∧ ∀ x ( X ( x ) → X (u( x )))) → ∀ x X ( x ))

Proposition 35.13. M ⊨ Count iff |M| is enumerable.

Proof. Suppose |M| is enumerable, and let m0, m1, . . . , be an enumeration.
By removing repetitions we can guarantee that no mk appears twice. Define
f(mk) = mk+1 (and f(mk) = mk if mk is the last element of the enumeration),
and let s(z) = m0 and s(u) = f. We show that

M, s ⊨ ∀X ((X(z) ∧ ∀x (X(x) → X(u(x)))) → ∀x X(x))


Suppose s′ ∼X s, and M = s′(X). Suppose further that M, s′ ⊨ (X(z) ∧
∀x (X(x) → X(u(x)))). Then s′(z) ∈ M and whenever x ∈ M, also s′(u)(x) ∈
M. In other words, since s′ ∼X s, m0 ∈ M and if x ∈ M then f(x) ∈ M, so
m0 ∈ M, m1 = f(m0) ∈ M, m2 = f(f(m0)) ∈ M, etc. Thus, M = |M|, and so
M, s′ ⊨ ∀x X(x). Since s′ was an arbitrary X-variant of s, we are done.

Now assume that

M, s ⊨ ∀X ((X(z) ∧ ∀x (X(x) → X(u(x)))) → ∀x X(x))

for some s. Let m = s(z) and f = s(u) and consider M = {m, f(m), f(f(m)), . . . }.
Let s′ be the X-variant of s with s′(X) = M. Then

M, s′ ⊨ ((X(z) ∧ ∀x (X(x) → X(u(x)))) → ∀x X(x))

by assumption. Also, M, s′ ⊨ X(z) since s′(X) = M ∋ m = s′(z), and also
M, s′ ⊨ ∀x (X(x) → X(u(x))) since whenever x ∈ M also f(x) ∈ M. So, since
both the antecedent and the conditional are satisfied, the consequent must be
satisfied as well: M, s′ ⊨ ∀x X(x). But that means that M = |M|, and so |M| is enumerable.
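The key step in the proof, that the least set containing z and closed under u is exactly {z, u(z), u(u(z)), . . . }, can be imitated on a finite structure by computing the orbit of z under u and comparing it with the domain. A Python sketch (our own illustration, not part of the text):

```python
def orbit(z, f):
    """Least set containing z and closed under the function f (a dict)."""
    M = {z}
    frontier = [z]
    while frontier:
        y = f[frontier.pop()]
        if y not in M:
            M.add(y)
            frontier.append(y)
    return M

domain = {0, 1, 2, 3}
f = {0: 1, 1: 2, 2: 3, 3: 3}   # "successor"; the last element maps to itself

assert orbit(0, f) == domain    # z = 0 and u = f witness enumerability
assert orbit(2, f) != domain    # starting at 2 misses 0 and 1
print("orbit(0) =", sorted(orbit(0, f)))
```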

Problems
Problem 35.1. Show that ∀X (X(v0) → X(v1)) (note: → not ↔!) defines Id|M|.

Problem 35.2. The sentence Inf ∧ Count is true in all and only denumerable
domains. Adjust the definition of Count so that it becomes a different sentence
that directly expresses that the domain is denumerable, and prove that it does.



Chapter 36

Metatheory of Second-order Logic

36.1 Introduction

First-order logic has a number of nice properties. We know it is not decidable,


but at least it is axiomatizable. That is, there are proof systems for first-order
logic which are sound and complete, i.e., they give rise to a derivability re-
lation ⊢ with the property that for any set of sentences Γ and sentence ϕ,
Γ ⊨ ϕ iff Γ ⊢ ϕ. This means in particular that the validities of first-order logic
are computably enumerable: there is a computable function f : N → Sent(L)
such that the values of f are all and only the valid sentences of L. This is so be-
cause derivations can be enumerated, and those that derive a single sentence
are then mapped to that sentence. Second-order logic is more expressive than
first-order logic, and so it is in general more complicated to capture its validi-
ties. In fact, we’ll show that second-order logic is not only undecidable, but
its validities are not even computably enumerable. This means there can be
no sound and complete proof system for second-order logic (although sound,
but incomplete proof systems are available and in fact are important objects
of research).

First-order logic also has two more properties: it is compact (if every fi-
nite subset of a set Γ of sentences is satisfiable, Γ itself is satisfiable) and the
Löwenheim-Skolem Theorem holds for it (if Γ has an infinite model it has a de-
numerable model). Both of these results fail for second-order logic. Again, the
reason is that second-order logic can express facts about the size of domains
that first-order logic cannot.


36.2 Second-order Arithmetic


Recall that the theory PA of Peano arithmetic includes the eight axioms of Q,

∀x x′ ≠ 0
∀x∀y (x′ = y′ → x = y)
∀x (x = 0 ∨ ∃y x = y′)
∀x∀y (x < y ↔ ∃z (x + z′) = y)
∀x (x + 0) = x
∀x∀y (x + y′) = (x + y)′
∀x (x × 0) = 0
∀x∀y (x × y′) = ((x × y) + x)

plus all sentences of the form

(ϕ(0) ∧ ∀x (ϕ(x) → ϕ(x′))) → ∀x ϕ(x)

The latter is a “schema,” i.e., a pattern that generates infinitely many sen-
tences of the language of arithmetic, one for each formula ϕ( x ). We call this
schema the (first-order) axiom schema of induction. In second-order Peano arith-
metic PA2 , induction can be stated as a single sentence. PA2 consists of the
first eight axioms above plus the (second-order) induction axiom:

∀ X ( X () ∧ ∀ x ( X ( x ) → X ( x 0 ))) → ∀ x X ( x ))

It says that if a subset X of the domain contains M and with any x ∈ |M| also
contains 0M ( x ) (i.e., it is “closed under successor”) it contains everything in
the domain (i.e., X = |M|).
The induction axiom guarantees that any structure satisfying it contains
only those elements of |M| the axioms require to be there, i.e., the values of n
for n ∈ N. A model of PA2 contains no non-standard numbers.

Theorem 36.1. If M ⊨ PA2 then |M| = {ValM(n) : n ∈ N}.

Proof. Let N = {ValM(n) : n ∈ N}, and suppose M ⊨ PA2. Of course, for any
n ∈ N, ValM(n) ∈ |M|, so N ⊆ |M|.
Now for inclusion in the other direction. Consider a variable assignment s
with s(X) = N. By assumption,

M ⊨ ∀X ((X(0) ∧ ∀x (X(x) → X(x′))) → ∀x X(x)), thus
M, s ⊨ (X(0) ∧ ∀x (X(x) → X(x′))) → ∀x X(x).

Consider the antecedent of this conditional. ValM(0) ∈ N, and so M, s ⊨
X(0). The second conjunct, ∀x (X(x) → X(x′)), is also satisfied. For suppose
x ∈ N. By definition of N, x = ValM(n) for some n. That gives ′M(x) =
ValM(n + 1) ∈ N. So, ′M(x) ∈ N.


We have that M, s ⊨ X(0) ∧ ∀x (X(x) → X(x′)). Consequently, M, s ⊨
∀x X(x). But that means that for every x ∈ |M| we have x ∈ s(X) = N. So,
|M| ⊆ N.

Corollary 36.2. Any two models of PA2 are isomorphic.

Proof. By ??, the domain of any model of PA2 is exhausted by ValM (n). Any
such model is also a model of Q. By ??, any such model is standard, i.e.,
isomorphic to N.

Above we defined PA2 as the theory that contains the first eight arith-
metical axioms plus the second-order induction axiom. In fact, thanks to the
expressive power of second-order logic, only the first two of the arithmetical
axioms plus induction are needed for second-order Peano arithmetic.

Proposition 36.3. Let PA2† be the second-order theory containing the first two arith-
metical axioms (the successor axioms) and the second-order induction axiom. <, +,
and × are definable in PA2†.

Proof. Exercise.

Corollary 36.4. M ⊨ PA2 iff M ⊨ PA2†.

Proof. Immediate from ??.

36.3 Second-order Logic is not Axiomatizable


Theorem 36.5. Second-order logic is undecidable.

Proof. A first-order sentence is valid in first-order logic iff it is valid in second-


order logic, and first-order logic is undecidable.

Theorem 36.6. There is no sound and complete proof system for second-order logic.

Proof. Let ϕ be a sentence in the language of arithmetic. N ⊨ ϕ iff PA2 ⊨ ϕ.
Let P be the conjunction of the nine axioms of PA2. PA2 ⊨ ϕ iff ⊨ P → ϕ, i.e.,
M ⊨ P → ϕ for all structures M. Now consider the sentence ∀z ∀u ∀u′ ∀u″ ∀L (P′ → ϕ′) resulting
by replacing 0 by the variable z, ′ by the one-place function variable u, + and × by the
two-place function variables u′ and u″, respectively, and < by the two-place
relation variable L, and universally quantifying. It is a valid sentence of pure
second-order logic iff the original sentence was valid iff PA2 ⊨ ϕ iff N ⊨ ϕ.
Thus if there were a sound and complete proof system for second-order logic,
we could use it to define a computable enumeration f : N → Sent(LA) of the
sentences true in N. This function would be representable in Q by some first-
order formula ψf(x, y). Then the formula ∃x ψf(x, y) would define the set of
true first-order sentences of N, contradicting Tarski's Theorem.


36.4 Second-order Logic is not Compact


Call a set of sentences Γ finitely satisfiable if every one of its finite subsets is
satisfiable. First-order logic has the property that if a set of sentences Γ is
finitely satisfiable, it is satisfiable. This property is called compactness. It has
an equivalent version involving entailment: if Γ ⊨ ϕ, then already Γ0 ⊨ ϕ for
some finite subset Γ0 ⊆ Γ. In this version it is an immediate corollary of the
completeness theorem: for if Γ ⊨ ϕ, by completeness Γ ⊢ ϕ. But a derivation
can only make use of finitely many sentences of Γ.
Compactness is not true for second-order logic. There are sets of second-
order sentences that are finitely satisfiable but not satisfiable, and that entail
some ϕ without a finite subset entailing ϕ.

Theorem 36.7. Second-order logic is not compact.

Proof. Recall that

Inf ≡ ∃u (∀x∀y (u(x) = u(y) → x = y) ∧ ∃y∀x y ≠ u(x))

is satisfied in a structure iff its domain is infinite. Let ϕ≥n be a sentence that
asserts that the domain has at least n elements, e.g.,

ϕ≥n ≡ ∃x1 . . . ∃xn (x1 ≠ x2 ∧ x1 ≠ x3 ∧ · · · ∧ xn−1 ≠ xn)

Consider
Γ = {¬Inf, ϕ≥1 , ϕ≥2 , ϕ≥3 , . . . }

It is finitely satisfiable, since for any finite subset Γ0 ⊆ Γ there is some k so that
ϕ≥k ∈ Γ0 but no ϕ≥n ∈ Γ0 for n > k. If |M| has k elements, M ⊨ Γ0. But Γ is not
satisfiable: if M ⊨ ¬Inf, |M| must be finite, say, of size k. Then M ⊭ ϕ≥k+1.

36.5 The Löwenheim-Skolem Theorem Fails for


Second-order Logic
The (Downward) Löwenheim-Skolem Theorem states that every set of sen-
tences with an infinite model has an enumerable model. It, too, is a conse-
quence of the completeness theorem: the proof of completeness generates
a model for any consistent set of sentences, and that model is enumerable.
There is also an Upward Löwenheim-Skolem Theorem, which guarantees that
if a set of sentences has a denumerable model it also has a non-enumerable
model. Both theorems fail in second-order logic.

Theorem 36.8. The Löwenheim-Skolem Theorem fails for second-order logic: There
are sentences with infinite models but no enumerable models.

Proof. Recall that

Count ≡ ∃z ∃u ∀ X (( X (z) ∧ ∀ x ( X ( x ) → X (u( x )))) → ∀ x X ( x ))

is true in a structure M iff |M| is enumerable. So Inf ∧ ¬Count is true in M


iff |M| is both infinite and not enumerable. There are such structures—take
any non-enumerable set as the domain, e.g., ℘(N) or R. So Inf ∧ ¬Count has
infinite models but no enumerable models.

Theorem 36.9. There are sentences with denumerable but not with non-enumerable
models.

Proof. Count ∧ Inf is true in N but not in any structure M with |M| non-
enumerable.

Problems
Problem 36.1. Prove ??.

Problem 36.2. Give an example of a set Γ and a sentence ϕ so that Γ ⊨ ϕ but
for every finite subset Γ0 ⊆ Γ, Γ0 ⊭ ϕ.



Chapter 37

Second-order Logic and Set Theory

This section deals with coding powersets and the continuum in


second-order logic. The results are stated but proofs have yet to be filled
in. There are no problems yet—and the definitions and results themselves
may have problems. Use with caution and report anything that’s false or
unclear.

37.1 Introduction
Since second-order logic can quantify over subsets of the domain as well as
functions, it is to be expected that some amount, at least, of set theory can be
carried out in second-order logic. By “carry out,” we mean that it is possible
to express set-theoretic properties and statements in second-order logic, and
that this is possible without any special, non-logical vocabulary for sets (e.g.,
the membership predicate symbol of set theory). For instance, we can define unions
and intersections of sets and the subset relationship, but also compare the
sizes of sets, and state results such as Cantor’s Theorem.

37.2 Comparing Sets


Proposition 37.1. The formula ∀x (X(x) → Y(x)) defines the subset relation, i.e.,
M, s ⊨ ∀x (X(x) → Y(x)) iff s(X) ⊆ s(Y).

Proposition 37.2. The formula ∀x (X(x) ↔ Y(x)) defines the identity relation on
sets, i.e., M, s ⊨ ∀x (X(x) ↔ Y(x)) iff s(X) = s(Y).

Proposition 37.3. The formula ∃x X(x) defines the property of being non-empty,
i.e., M, s ⊨ ∃x X(x) iff s(X) ≠ ∅.


A set X is no larger than a set Y, X ⪯ Y, iff there is an injective function


f : X → Y. Since we can express that a function is injective, and also that its
values for arguments in X are in Y, we can also define the relation of being no
larger than on subsets of the domain.

Proposition 37.4. The formula

∃u (∀ x ( X ( x ) → Y (u( x ))) ∧ ∀ x ∀y (u( x ) = u(y) → x = y))

defines the relation of being no larger than.

Two sets are the same size, or “equinumerous,” X ≈ Y, iff there is a bijec-
tive function f : X → Y.

Proposition 37.5. The formula

∃u (∀ x ( X ( x ) → Y (u( x ))) ∧
∀ x ∀y (u( x ) = u(y) → x = y) ∧
∀y (Y (y) → ∃ x ( X ( x ) ∧ y = u( x ))))

defines the relation of being equinumerous with.

We will abbreviate these formulas, respectively, as X ⊆ Y, X = Y, X ≠ ∅,
X ⪯ Y, and X ≈ Y. (This may be slightly confusing, since we use the
same notation when we speak informally about sets X and Y—but here the
notation is an abbreviation for formulas in second-order logic involving one-
place relation variables X and Y.)

Proposition 37.6. The sentence ∀X∀Y ((X ⪯ Y ∧ Y ⪯ X) → X ≈ Y) is valid.

Proof. The sentence is satisfied in a structure M iff, for any subsets X ⊆ |M| and Y ⊆ |M|,
if X ⪯ Y and Y ⪯ X then X ≈ Y. But this holds for any sets X and Y—it is the
Schröder-Bernstein Theorem.

37.3 Cardinalities of Sets


Just as we can express that the domain is finite or infinite, enumerable or non-
enumerable, we can define the property of a subset of |M| being finite or infi-
nite, enumerable or non-enumerable.

Proposition 37.7. The formula Inf(X) ≡

∃u (∀x∀y (u(x) = u(y) → x = y) ∧
∃y (X(y) ∧ ∀x (X(x) → y ≠ u(x))))

is satisfied with respect to a variable assignment s iff s(X) is infinite.


Proposition 37.8. The formula Count(X) ≡

∃z ∃u (X(z) ∧ ∀x (X(x) → X(u(x))) ∧
∀Y ((Y(z) ∧ ∀x (Y(x) → Y(u(x)))) → X ⊆ Y))

is satisfied with respect to a variable assignment s iff s(X) is enumerable.

We know from Cantor’s Theorem that there are non-enumerable sets, and
in fact, that there are infinitely many different levels of infinite sizes. Set the-
ory develops an entire arithmetic of sizes of sets, and assigns infinite cardinal
numbers to sets. The natural numbers serve as the cardinal numbers measur-
ing the sizes of finite sets. The cardinality of denumerable sets is the first infi-
nite cardinality, called ℵ0 (“aleph-nought” or “aleph-zero”). The next infinite
size is ℵ1. It is the smallest size a set can be without being enumerable (i.e.,
without being finite or of size ℵ0). We can define "X has size ℵ0" as Aleph0(X) ≡ Inf(X) ∧ Count(X).
X has size ℵ1 iff all its subsets are finite or have size ℵ0, but it is not itself of
size ℵ0. Hence we can express this by the formula Aleph1(X) ≡ ∀Y (Y ⊆
X → (¬Inf(Y) ∨ Aleph0(Y))) ∧ ¬Aleph0(X). Being of size ℵ2 is defined similarly, etc.
There is one size of special interest, the so-called cardinality of the contin-
uum. It is the size of ℘(N), or, equivalently, the size of R. That a set is the size
of the continuum can also be expressed in second-order logic, but requires a
bit more work.

37.4 The Power of the Continuum


In second-order logic we can quantify over subsets of the domain, but not over
sets of subsets of the domain. To do this directly, we would need third-order
logic. For instance, if we wanted to state Cantor’s Theorem that there is no
injective function from the power set of a set to the set itself, we might try to
formulate it as "for every set X, and every set P, if P is the power set of X, then
not P ⪯ X." And to say that P is the power set of X would require formalizing
that the elements of P are all and only the subsets of X, so something like
∀Y ( P(Y ) ↔ Y ⊆ X ). The problem lies in P(Y ): that is not a formula of second-
order logic, since only terms can be arguments to one-place relation variables
like P.
We can, however, simulate quantification over sets of sets, if the domain is
large enough. The idea is to make use of the fact that a two-place relation R
relates elements of the domain to elements of the domain. Given such an R, we
can collect all the elements to which some x is R-related: {y ∈ |M| : R(x, y)}
is the set "coded by" x. Conversely, if Z ⊆ ℘(|M|) is some collection of subsets
of |M|, and there are at least as many elements of |M| as there are sets
in Z, then there is also a relation R ⊆ |M|² such that every Y ∈ Z is coded by
some x using R.


Definition 37.9. If R ⊆ |M|², then x R-codes {y ∈ |M| : R(x, y)}. Y R-codes
℘(X) iff for every Z ⊆ X, some x ∈ Y R-codes Z, and every x ∈ Y R-codes
some Z ⊆ X.

Proposition 37.10. The formula

Codes( x, R, Y ) ≡ ∀y (Y (y) ↔ R( x, y))

expresses that s( x ) s( R)-codes s(Y ). The formula

Pow(Y, R, X) ≡
∀Z (Z ⊆ X → ∃x (Y(x) ∧ Codes(x, R, Z))) ∧
∀x (Y(x) → ∀Z (Codes(x, R, Z) → Z ⊆ X))

expresses that s(Y ) s( R)-codes the power set of s( X ).

With this trick, we can express statements about the power set by quantify-
ing over the codes of subsets rather than the subsets themselves. For instance,
Cantor’s Theorem can now be expressed by saying that there is no injective
function from the domain of any relation that codes the power set of X to X
itself.
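On a finite domain the coding trick can be made concrete: choose, for each element, the subset it should code, and record those choices as a relation R. The Python sketch below is our own illustration (the particular domain and R are made up); it R-codes all four subsets of a two-element set X inside a four-element domain.

```python
domain = {0, 1, 2, 3}
X = {0, 1}

# Assign to each element of the domain one subset of X to code ...
subsets_of_X = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]
assignment = dict(zip(sorted(domain), subsets_of_X))

# ... and realize the assignment as a relation R on the domain.
R = {(x, y) for x, Z in assignment.items() for y in Z}

def coded_by(x, R):
    """The set R-coded by x, i.e., {y : R(x, y)}."""
    return frozenset(y for (a, y) in R if a == x)

# Every subset of X is coded by some x, and every x codes a subset of X.
assert {coded_by(x, R) for x in domain} == set(subsets_of_X)
print({x: set(coded_by(x, R)) for x in sorted(domain)})
```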

Proposition 37.11. The sentence

∀X ∀Y ∀R (Pow(Y, R, X) →
¬∃u (∀x∀y (u(x) = u(y) → x = y) ∧
∀x (Y(x) → X(u(x)))))

is valid.

The power set of a denumerable set is non-enumerable, and so its cardinal-


ity is larger than that of any denumerable set (which is ℵ0 ). The size of ℘(R)
is called the “power of the continuum,” since it is the same size as the points
on the real number line, R. If the domain is large enough to code the power
set of a denumerable set, we can express that a set is the size of the continuum
by saying that it is equinumerous with any set Y that codes the power set of
set X of size ℵ0 . (If the domain is not large enough, i.e., it contains no subset
equinumerous with R, then there can also be no relation that codes ℘( X ).)

Proposition 37.12. If R ⪯ |M|, then the formula

Cont(X) ≡ ∀Z ∀Y ∀R ((Aleph0(Z) ∧ Pow(Y, R, Z)) → Y ≈ X)

expresses that s(X) ≈ R.


Proposition 37.13. |M| ≈ R iff

M ⊨ ∃X ∃Y ∃R (Aleph0(X) ∧ Pow(Y, R, X) ∧
∃u (∀x∀y (u(x) = u(y) → x = y) ∧
∀y (Y(y) → ∃x y = u(x))))

The Continuum Hypothesis is the statement that the size of the continuum
is the first non-enumerable cardinality, i.e, that ℘(N) has size ℵ1 .

Proposition 37.14. The Continuum Hypothesis is true iff

CH ≡ ∀X (Aleph1(X) ↔ Cont(X))

is valid.

Note that it isn’t true that ¬CH is valid iff the Continuum Hypothesis is
false. In an enumerable domain, there are no subsets of size ℵ1 and also no
subsets of the size of the continuum, so CH is always true in an enumerable
domain. However, we can give a different sentence that is valid iff the Con-
tinuum Hypothesis is false:

Proposition 37.15. The Continuum Hypothesis is false iff

NCH ≡ ∀X (Cont(X) → ∃Y (Y ⊆ X ∧ ¬Count(Y) ∧ ¬X ≈ Y))

is valid.



Part IX

Normal Modal Logics


This part covers the metatheory of normal modal logics. It currently


consists of Aldo Antonelli’s notes on classical correspondence theory for
basic modal logic.



Chapter 38

Syntax and Semantics of Normal


Modal Logics

38.1 Introduction
Modal Logic deals with modal propositions and the entailment relations among
them. Examples of modal propositions are the following:

1. It is necessary that 2 + 2 = 4.

2. It is necessarily possible that it will rain tomorrow.

3. If it is necessarily possible that ϕ then it is possible that ϕ.

Possibility and necessity are not the only modalities: other unary connectives
are also classified as modalities, for instance, “it ought to be the case that ϕ,”
“It will be the case that ϕ,” “Dana knows that ϕ,” or “Dana believes that ϕ.”
Modal logic makes its first appearance in Aristotle’s De Interpretatione: he
was the first to notice that necessity implies possibility, but not vice versa; that
possibility and necessity are inter-definable; that if ϕ ∧ ψ is possibly true then
ϕ is possibly true and ψ is possibly true, but not conversely; and that if ϕ → ψ
is necessary, then if ϕ is necessary, so is ψ.
The first modern approach to modal logic was the work of C. I. Lewis, cul-
minating with Lewis and Langford, Symbolic Logic (1932). Lewis & Langford
were unhappy with the representation of implication by means of the material
conditional: ϕ → ψ is a poor substitute for “ϕ implies ψ.” Instead, they pro-
posed to characterize implication as “Necessarily, if ϕ then ψ,” symbolized
as ϕ J ψ. In trying to sort out the different properties, Lewis identified five
different modal systems, S1, . . . , S4, S5, the last two of which are still in use.
The approach of Lewis and Langford was purely syntactical: they identi-
fied reasonable axioms and rules and investigated what was provable with
those means. A semantic approach remained elusive for a long time, until a


first attempt was made by Rudolf Carnap in Meaning and Necessity (1947) us-
ing the notion of a state description, i.e., a collection of atomic sentences (those
that are “true” in that state description). After lifting the truth definition to
arbitrary sentences ϕ, Carnap defines ϕ to be necessarily true if it is true in all
state descriptions. Carnap’s approach could not handle iterated modalities, in
that sentences of the form “Possibly necessarily . . . possibly ϕ” always reduce
to the innermost modality.
The major breakthrough in modal semantics came with Saul Kripke’s arti-
cle “A Completeness Theorem in Modal Logic” (JSL 1959). Kripke based his
work on Leibniz’s idea that a statement is necessarily true if it is true “at all
possible worlds.” This idea, though, suffers from the same drawbacks as Car-
nap's, in that the truth of a statement at a world w (or a state description s) does
not depend on w at all. So Kripke assumed that worlds are related by an ac-
cessibility relation R, and that a statement of the form “Necessarily ϕ” is true at
a world w if and only if ϕ is true at all worlds w0 accessible from w. Semantics
that provide some version of this approach are called Kripke semantics and
made possible the tumultuous development of modal logics (in the plural).
When interpreted by the Kripke semantics, modal logic shows us what re-
lational structures look like “from the inside.” A relational structure is just a set
equipped with a binary relation (for instance, the set of students in the class
ordered by their social security number is a relational structure). But in fact re-
lational structures come in all sorts of domains: besides relative possibility of
states of the world, we can have epistemic states of some agent related by epis-
temic possibility, or states of a dynamical system with their state transitions,
etc. Modal logic can be used to model all of these: the first give us ordinary,
alethic, modal logic; the others give us epistemic logic, dynamic logic, etc.
We focus on one particular angle, known to modal logicians as “corre-
spondence theory.” One of the most significant early discoveries of Kripke’s
is that many properties of the accessibility relation R (whether it is transitive,
symmetric, etc.) can be characterized in the modal language itself by means
of appropriate “modal schemas.” Modal logicians say, for instance, that the
reflexivity of R “corresponds” to the schema “If necessarily ϕ, then ϕ”. We
explore mainly the correspondence theory of a number of classical systems of
modal logic (e.g., S4 and S5) obtained by a combination of the schemas D, T,
B, 4, and 5.

38.2 The Language of Basic Modal Logic


Definition 38.1. The basic language of modal logic contains

1. The propositional constant for falsity ⊥.

2. A denumerable set of propositional variables: p0 , p1 , p2 , . . .


3. The propositional connectives: ¬ (negation), ∧ (conjunction), ∨ (disjunc-


tion), → (conditional).

4. The modal operator □.

5. The modal operator ♦.

Definition 38.2. Formulas of the basic modal language are inductively defined
as follows:

1. ⊥ is an atomic formula.

2. Every propositional variable pi is an (atomic) formula.

3. If ϕ and ψ are formulas, then ( ϕ ∧ ψ) is a formula.

4. If ϕ and ψ are formulas, then ( ϕ ∨ ψ) is a formula.

5. If ϕ and ψ are formulas, then ( ϕ → ψ) is a formula.

6. If ϕ is a formula, so is □ϕ.

7. If ϕ is a formula, then ♦ϕ is a formula.

8. Nothing else is a formula.

If a formula ϕ does not contain □ or ♦, we say it is modal-free.

38.3 Simultaneous Substitution


An instance of a formula ϕ is the result of replacing all occurrences of a propo-
sitional variable in ϕ by some other formula. We will refer to instances of for-
mulas often, both when discussing validity and when discussing derivability.
It therefore is useful to define the notion precisely.

Definition 38.3. Where ϕ is a modal formula all of whose propositional vari-


ables are among p1, . . . , pn, and θ1, . . . , θn are also modal formulas, we define
ϕ[θ1/p1, . . . , θn/pn] as the result of simultaneously substituting each θi for pi
in ϕ. Formally, this is a definition by induction on ϕ:

1. ϕ ≡ ⊥: ϕ[θ1 /p1 , . . . , θn /pn ] is ⊥.

2. ϕ ≡ q: ϕ[θ1 /p1 , . . . , θn /pn ] is q, provided q 6≡ pi for i = 1, . . . , n.

3. ϕ ≡ pi : ϕ[θ1 /p1 , . . . , θn /pn ] is θi .

4. ϕ ≡ ¬ψ: ϕ[θ1 /p1 , . . . , θn /pn ] is ¬ψ[θ1 /p1 , . . . , θn /pn ].

5. ϕ ≡ (ψ ∧ χ): ϕ[θ1/p1, . . . , θn/pn] is

(ψ[θ1/p1, . . . , θn/pn] ∧ χ[θ1/p1, . . . , θn/pn]).

6. ϕ ≡ (ψ ∨ χ): ϕ[θ1/p1, . . . , θn/pn] is

(ψ[θ1/p1, . . . , θn/pn] ∨ χ[θ1/p1, . . . , θn/pn]).

7. ϕ ≡ (ψ → χ): ϕ[θ1/p1, . . . , θn/pn] is

(ψ[θ1/p1, . . . , θn/pn] → χ[θ1/p1, . . . , θn/pn]).

8. ϕ ≡ (ψ ↔ χ): ϕ[θ1/p1, . . . , θn/pn] is

(ψ[θ1/p1, . . . , θn/pn] ↔ χ[θ1/p1, . . . , θn/pn]).

9. ϕ ≡ □ψ: ϕ[θ1/p1, . . . , θn/pn] is □ψ[θ1/p1, . . . , θn/pn].

10. ϕ ≡ ♦ψ: ϕ[θ1 /p1 , . . . , θn /pn ] is ♦ψ[θ1 /p1 , . . . , θn /pn ].

The formula ϕ[θ1 /p1 , . . . , θn /pn ] is called a substitution instance of ϕ.

Example 38.4. Suppose ϕ is p1 → (p1 ∧ p2), θ1 is ♦(p2 → p3) and θ2 is ¬p1.


Then ϕ[θ1 /p1 , θ2 /p2 ] is

♦( p2 → p3 ) → (♦( p2 → p3 ) ∧ ¬p1 )

while ϕ[θ2 /p1 , θ1 /p2 ] is

¬p1 → (¬p1 ∧ ♦( p2 → p3 ))

Note that simultaneous substitution is in general not the same as iterated sub-
stitution, e.g., compare ϕ[θ1 /p1 , θ2 /p2 ] above with ϕ[θ1 /p1 ][θ2 /p2 ]:

♦(¬p1 → p3 ) → (♦(¬p1 → p3 ) ∧ ¬p1 )

and with ϕ[θ2 /p2 ][θ1 /p1 ]:

♦(p2 → p3) → (♦(p2 → p3) ∧ ¬♦(p2 → p3))
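The difference between simultaneous and iterated substitution is easy to see when formulas are represented as syntax trees. The following Python sketch uses a made-up tuple encoding of formulas (our own assumption, not part of the text) to implement the clauses of the definition above and replay the computations of this example:

```python
def subst(phi, sigma):
    """Simultaneous substitution: replace variable i by sigma[i] in phi."""
    if phi == 'bot':
        return phi
    op = phi[0]
    if op == 'p':                        # propositional variable p_i
        return sigma.get(phi[1], phi)
    if op in ('not', 'box', 'dia'):      # unary connectives
        return (op, subst(phi[1], sigma))
    # binary connectives: 'and', 'or', 'imp', 'iff'
    return (op, subst(phi[1], sigma), subst(phi[2], sigma))

p1, p2, p3 = ('p', 1), ('p', 2), ('p', 3)
phi = ('imp', p1, ('and', p1, p2))       # p1 -> (p1 & p2)
theta1 = ('dia', ('imp', p2, p3))        # <>(p2 -> p3)
theta2 = ('not', p1)                     # ~p1

simultaneous = subst(phi, {1: theta1, 2: theta2})
iterated = subst(subst(phi, {1: theta1}), {2: theta2})
print(simultaneous == iterated)          # False: the two notions differ
```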

38.4 Relational Models


The basic concept of semantics for normal modal logics is that of a relational
model. It consists of a set of worlds, which are related by a binary “accessibility
relation,” together with an assignment which determines which propositional
variables count as “true” at which worlds.

Definition 38.5. A model for the basic modal language is a triple M = hW, R, V i,
where


[Figure 38.1: A simple model with three worlds: w1 (p, ¬q), w2 (p, q), and w3 (¬p, ¬q); w2 and w3 are accessible from w1.]

1. W is a nonempty set of “worlds,”

2. R is a binary accessibility relation on W, and

3. V is a function assigning to each propositional variable p a set V ( p) of


possible worlds.

When Rww′ holds, we say that w′ is accessible from w. When w ∈ V(p) we say
p is true at w.

The great advantage of relational semantics is that models can be represented
by means of simple diagrams, such as the one in ??. Worlds are represented
by nodes, and world w′ is accessible from w precisely when there is
an arrow from w to w′. Moreover, we label a node (world) w with p when w ∈
V(p), and otherwise with ¬p. ?? represents the model with W = {w1, w2, w3},
R = {⟨w1, w2⟩, ⟨w1, w3⟩}, V(p) = {w1, w2}, and V(q) = {w2}.

38.5 Truth at a World


Every modal model determines which modal formulas count as true at which
worlds in it. The relation “model M makes formula ϕ true at world w” is the
basic notion of relational semantics. The relation is defined inductively and
coincides with the usual characterization using truth tables for the non-modal
operators.

Definition 38.6. Truth of a formula ϕ at w in a model M, in symbols: M, w ⊩ ϕ, is
defined inductively as follows:

1. ϕ ≡ ⊥: Never M, w ⊩ ⊥.

2. M, w ⊩ p iff w ∈ V(p).

3. ϕ ≡ ¬ψ: M, w ⊩ ϕ iff M, w ⊮ ψ.

4. ϕ ≡ (ψ ∧ χ): M, w ⊩ ϕ iff M, w ⊩ ψ and M, w ⊩ χ.

5. ϕ ≡ (ψ ∨ χ): M, w ⊩ ϕ iff M, w ⊩ ψ or M, w ⊩ χ (or both).

6. ϕ ≡ (ψ → χ): M, w ⊩ ϕ iff M, w ⊮ ψ or M, w ⊩ χ.

7. ϕ ≡ □ψ: M, w ⊩ ϕ iff M, w′ ⊩ ψ for all w′ ∈ W with Rww′.

8. ϕ ≡ ♦ψ: M, w ⊩ ϕ iff M, w′ ⊩ ψ for at least one w′ ∈ W with Rww′.

Note that by clause ??, a formula □ψ is true at w whenever there are no w′
with Rww′. In such a case □ψ is vacuously true at w. Also, □ψ may be satisfied
at w even if ψ is not. The truth of ψ at w does not guarantee the truth of ♦ψ
at w. This holds, however, if Rww, e.g., if R is reflexive. If there is no w′ such
that Rww′, then M, w ⊮ ♦ϕ, for any ϕ.
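Since all clauses are effective for finite W, the definition translates directly into a short recursive evaluator. The Python sketch below (the tuple encoding of formulas is our own assumption, not part of the text) evaluates a few formulas in the simple three-world model described above, confirming for instance that □q holds vacuously at the dead-end world w3:

```python
# The simple example model: W = {1, 2, 3}, R = {(1, 2), (1, 3)},
# V(p) = {1, 2}, V(q) = {2}.
W = {1, 2, 3}
R = {(1, 2), (1, 3)}
V = {'p': {1, 2}, 'q': {2}}

def holds(w, phi):
    """Truth of phi at world w, following the clauses of the definition."""
    if phi == 'bot':
        return False
    op = phi[0]
    if op == 'var':
        return w in V[phi[1]]
    if op == 'not':
        return not holds(w, phi[1])
    if op == 'and':
        return holds(w, phi[1]) and holds(w, phi[2])
    if op == 'or':
        return holds(w, phi[1]) or holds(w, phi[2])
    if op == 'imp':
        return (not holds(w, phi[1])) or holds(w, phi[2])
    if op == 'box':      # true at w iff true at all R-successors of w
        return all(holds(v, phi[1]) for v in W if (w, v) in R)
    if op == 'dia':      # true at w iff true at some R-successor of w
        return any(holds(v, phi[1]) for v in W if (w, v) in R)
    raise ValueError(phi)

p, q = ('var', 'p'), ('var', 'q')
print(holds(1, ('dia', q)))   # True: q holds at the accessible world 2
print(holds(3, ('box', q)))   # True, vacuously: world 3 has no successors
print(holds(1, ('box', p)))   # False: p fails at the accessible world 3
```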

Proposition 38.7. 1. M, w ⊩ □ϕ iff M, w ⊩ ¬♦¬ϕ.

2. M, w ⊩ ♦ϕ iff M, w ⊩ ¬□¬ϕ.

Proof. 1. M, w ⊩ ¬♦¬ϕ iff M, w ⊮ ♦¬ϕ by definition of M, w ⊩. M, w ⊩ ♦¬ϕ
iff for some w′ with Rww′, M, w′ ⊩ ¬ϕ. Hence, M, w ⊮ ♦¬ϕ iff for all
w′ with Rww′, M, w′ ⊮ ¬ϕ. We also have M, w′ ⊮ ¬ϕ iff M, w′ ⊩ ϕ.
Together we have M, w ⊩ ¬♦¬ϕ iff for all w′ with Rww′, M, w′ ⊩ ϕ.
Again by definition of M, w ⊩, that is the case iff M, w ⊩ □ϕ.

2. Exercise.

38.6 Truth in a Model


Sometimes we are interested which formulas are true at every world in a given
model. Let’s introduce a notation for this.

Definition 38.8. A formula ϕ is true in a model M = ⟨W, R, V⟩, written M ⊩ ϕ,
if and only if M, w ⊩ ϕ for every w ∈ W.

Proposition 38.9. 1. If M ⊩ ϕ then M ⊮ ¬ϕ, but not vice-versa.

2. If M ⊩ ϕ → ψ then M ⊩ ϕ only if M ⊩ ψ, but not vice-versa.

Proof. 1. If M ⊩ ϕ then ϕ is true at all worlds in W, and since W ≠ ∅, it
can't be that M ⊩ ¬ϕ, or else ϕ would have to be both true and false at
some world.
On the other hand, if M ⊮ ¬ϕ then ϕ is true at some world w ∈ W.
It does not follow that M, w ⊩ ϕ for every w ∈ W. For instance, in the
model of ??, M ⊮ ¬p, and also M ⊮ p.


2. Assume M ⊩ ϕ → ψ and M ⊩ ϕ; to show M ⊩ ψ, let w ∈ W be an
arbitrary world. Then M, w ⊩ ϕ → ψ and M, w ⊩ ϕ, so M, w ⊩ ψ, and
since w was arbitrary, M ⊩ ψ.
To show that the converse fails, we need to find a model M such that
M ⊩ ϕ only if M ⊩ ψ, but M ⊮ ϕ → ψ. Consider again the model of
??: M ⊮ p and hence (vacuously) M ⊩ p only if M ⊩ q. However,
M ⊮ p → q, as p is true but q false at w1.

38.7 Validity
Formulas that are true in all models, i.e., true at every world in every model,
are particularly interesting. They represent those modal propositions which
are true regardless of how □ and ♦ are interpreted, as long as the interpretation
is "normal" in the sense that it is generated by some accessibility relation
on possible worlds. We call such formulas valid. For instance, □(p ∧ q) → □p
is valid. Some formulas one might expect to be valid on the basis of the alethic
interpretation of □, such as □p → p, are not valid, however. Part of the interest
of relational models is that different interpretations of □ and ♦ can be captured
by different kinds of accessibility relations. This suggests that we should define
validity not just relative to all models, but relative to all models of a certain
kind. It will turn out, e.g., that □p → p is true in all models where every world
is accessible from itself, i.e., R is reflexive. Defining validity relative to classes
of models enables us to formulate this succinctly: □p → p is valid in the class
of reflexive models.

Definition 38.10. A formula ϕ is valid in a class C of models if it is true in
every model in C (i.e., true at every world in every model in C). If ϕ is valid
in C, we write C ⊨ ϕ, and we write ⊨ ϕ if ϕ is valid in the class of all models.

Proposition 38.11. If ϕ is valid in C it is also valid in each class C′ ⊆ C.

Proposition 38.12. If ϕ is valid, then so is □ϕ.

Proof. Assume ⊨ ϕ. To show ⊨ □ϕ, let M = ⟨W, R, V⟩ be a model and w ∈ W.
If Rww′ then M, w′ ⊩ ϕ, since ϕ is valid, and so also M, w ⊩ □ϕ. Since M and
w were arbitrary, ⊨ □ϕ.

38.8 Tautological Instances


A modal-free formula is a tautology if it is true under every truth-value assignment.
Clearly, every tautology is true at every world in every model. But
for formulas involving □ and ♦, the notion of tautology is not defined. Is it the
case, e.g., that □p ∨ ¬□p—an instance of the principle of excluded middle—is

valid? The notion of a tautological instance helps: a formula that is a substitu-


tion instance of a (non-modal) tautology. It is not surprising, but still requires
proof, that every tautological instance is valid.

Definition 38.13. A modal formula ψ is a tautological instance if and only if


there is a modal-free tautology ϕ with propositional variables p1 , . . . , pn and
formulas θ1 , . . . , θn such that ψ ≡ ϕ[θ1 /p1 , . . . , θn /pn ].

Lemma 38.14. Suppose ϕ is a modal-free formula whose propositional variables are
p1, . . . , pn, and let θ1, . . . , θn be modal formulas. Then for any assignment v, any
model M = ⟨W, R, V⟩, and any w ∈ W such that v(pi) = T if and only if M, w ⊩ θi,
we have that v ⊨ ϕ if and only if M, w ⊩ ϕ[θ1/p1, . . . , θn/pn].

Proof. By induction on ϕ.

1. ϕ ≡ ⊥: Both v ⊭ ⊥ and M, w ⊮ ⊥.

2. ϕ ≡ pi:

v ⊨ pi ⇔ v(pi) = T by definition of v ⊨ pi;
⇔ M, w ⊩ θi by assumption;
⇔ M, w ⊩ pi[θ1/p1, . . . , θn/pn] since pi[θ1/p1, . . . , θn/pn] ≡ θi.

3. ϕ ≡ ¬ψ:

v ⊨ ¬ψ ⇔ v ⊭ ψ by definition of v ⊨;
⇔ M, w ⊮ ψ[θ1/p1, . . . , θn/pn] by induction hypothesis;
⇔ M, w ⊩ ¬ψ[θ1/p1, . . . , θn/pn] by definition of M, w ⊩.

4. ϕ ≡ (ψ ∧ χ):

v ⊨ ψ ∧ χ ⇔ v ⊨ ψ and v ⊨ χ by definition of v ⊨;
⇔ M, w ⊩ ψ[θ1/p1, . . . , θn/pn] and M, w ⊩ χ[θ1/p1, . . . , θn/pn], by induction hypothesis;
⇔ M, w ⊩ (ψ ∧ χ)[θ1/p1, . . . , θn/pn] by definition of M, w ⊩.

5. ϕ ≡ (ψ ∨ χ):

v ⊨ ψ ∨ χ ⇔ v ⊨ ψ or v ⊨ χ by definition of v ⊨;
⇔ M, w ⊩ ψ[θ1/p1, . . . , θn/pn] or M, w ⊩ χ[θ1/p1, . . . , θn/pn], by induction hypothesis;
⇔ M, w ⊩ (ψ ∨ χ)[θ1/p1, . . . , θn/pn] by definition of M, w ⊩.

6. ϕ ≡ (ψ → χ):

v ⊨ ψ → χ ⇔ v ⊭ ψ or v ⊨ χ by definition of v ⊨;
⇔ M, w ⊮ ψ[θ1/p1, . . . , θn/pn] or M, w ⊩ χ[θ1/p1, . . . , θn/pn], by induction hypothesis;
⇔ M, w ⊩ (ψ → χ)[θ1/p1, . . . , θn/pn] by definition of M, w ⊩.

Proposition 38.15. All tautological instances are valid.

Proof. Contrapositively, suppose ϕ is such that M, w ⊮ ϕ[θ1/p1, . . . , θn/pn],
for some model M and world w. Define an assignment v such that v(pi) = T
if and only if M, w ⊩ θi (and v assigns arbitrary values to q ∉ {p1, . . . , pn}).
Then by ??, v ⊭ ϕ, so ϕ is not a tautology.

38.9 Schemas and Validity


Definition 38.16. A schema is a set of formulas comprising all and only the
substitution instances of some modal formula χ, i.e.,
{ψ : ∃θ1 , . . . , ∃θn (ψ = χ[θ1 /p1 , . . . , θn /pn ])}.
The formula χ is called the characteristic formula of the schema, and it is
unique up to a renaming of the propositional variables. A formula ϕ is an
instance of a schema if it is a member of the set.
It is convenient to denote a schema by the meta-linguistic expression ob-
tained by substituting ‘ϕ’, ‘ψ’, . . . , for the atomic components of χ. So, for
instance, the following denote schemas: ‘ϕ’, ‘ϕ → ϕ’, ‘ϕ → (ψ → ϕ)’. They
correspond to the characteristic formulas p, p → p, p → (q → p). The schema
‘ϕ’ denotes the set of all formulas.
Definition 38.17. A schema is true in a model if and only if all of its instances
are; and a schema is valid if and only if it is true in every model.
Proposition 38.18. The following schema K is valid:

□(ϕ → ψ) → (□ϕ → □ψ). (K)

Proof. We need to show that all instances of the schema are true at every world
in every model. So let M = ⟨W, R, V⟩ and w ∈ W be arbitrary. To show that
a conditional is true at a world we assume the antecedent is true and show that the
consequent is true as well. In this case, let M, w ⊩ □(ϕ → ψ) and M, w ⊩ □ϕ.
We need to show M, w ⊩ □ψ. So let w′ be arbitrary such that Rww′. Then by the
first assumption M, w′ ⊩ ϕ → ψ and by the second assumption M, w′ ⊩ ϕ. It
follows that M, w′ ⊩ ψ. Since w′ was arbitrary, M, w ⊩ □ψ.


Valid Schemas                          Invalid Schemas

□(ϕ → ψ) → (♦ϕ → ♦ψ)                   □(ϕ ∨ ψ) → (□ϕ ∨ □ψ)
♦(ϕ → ψ) → (□ϕ → ♦ψ)                   (♦ϕ ∧ ♦ψ) → ♦(ϕ ∧ ψ)
□(ϕ ∧ ψ) ↔ (□ϕ ∧ □ψ)                   ϕ → □ϕ
□ϕ → □(ψ → ϕ)                          ♦ϕ → □ψ
¬♦ϕ → □(ϕ → ψ)                         □ϕ → ϕ
♦(ϕ ∨ ψ) ↔ (♦ϕ ∨ ♦ψ)                   ♦□ϕ → □♦ϕ

Table 38.1: Valid and (or?) invalid schemas.

Proposition 38.19. The following schema DUAL is valid

♦ϕ ↔ ¬□¬ϕ. (DUAL)

Proof. Exercise.

Proposition 38.20. If ϕ and ϕ → ψ are true at a world in a model then so is ψ.


Hence, the valid formulas are closed under modus ponens.

Proposition 38.21. A formula ϕ is valid iff all its substitution instances are. In
other words, a schema is valid iff its characteristic formula is.

Proof. The "if" direction is obvious, since ϕ is a substitution instance of itself.
To prove the "only if" direction, we show the following: Suppose M =
⟨W, R, V⟩ is a modal model, and ψ ≡ ϕ[θ1/p1, . . . , θn/pn] is a substitution
instance of ϕ. Define M′ = ⟨W, R, V′⟩ by V′(pi) = {w : M, w ⊩ θi}. Then
M, w ⊩ ψ iff M′, w ⊩ ϕ, for any w ∈ W. (We leave the proof as an exercise.)
Now suppose that ϕ was valid, but some substitution instance ψ of ϕ was not
valid. Then for some M = ⟨W, R, V⟩ and some w ∈ W, M, w ⊮ ψ. But then
M′, w ⊮ ϕ by the claim, and ϕ is not valid, a contradiction.

Note, however, that it is not true that a schema is true in a model iff its
characteristic formula is. Of course, the “only if” direction holds: if every
instance of ϕ is true in M, ϕ itself is true in M. But it may happen that ϕ
is true in M but some instance of ϕ is false at some world in M. For a very
simple counterexample consider p in a model with only one world w and
V ( p) = {w}, so that p is true at w. But ⊥ is an instance of p, and not true at w.

38.10 Entailment
With the definition of truth at a world, we can define an entailment relation
between formulas. A formula ψ entails ϕ iff, whenever ψ is true, ϕ is true as
well. Here, “whenever” means both “whichever model we consider” as well
as “whichever world in that model we consider.”


[Figure 38.2: Counterexample to p → ♦p ⊨ □p → p: worlds w2 and w3, where p holds, are accessible from w1, where p fails.]

Definition 38.22. If Γ is a set of formulas and ϕ a formula, then Γ entails ϕ,
in symbols: Γ ⊨ ϕ, if and only if for every model M = ⟨W, R, V⟩ and world
w ∈ W, if M, w ⊩ ψ for every ψ ∈ Γ, then M, w ⊩ ϕ. If Γ contains a single
formula ψ, then we write ψ ⊨ ϕ.

Example 38.23. To show that a formula entails another, we have to reason
about all models, using the definition of M, w ⊩. For instance, to show □p →
♦p ⊨ □¬p → ¬□p, we might argue as follows: Consider a model M = ⟨W, R, V⟩
and w ∈ W, and suppose M, w ⊩ □p → ♦p. We have to show that M, w ⊩
□¬p → ¬□p. Suppose not. Then M, w ⊩ □¬p and M, w ⊮ ¬□p. Since M, w ⊮
¬□p, M, w ⊩ □p. By assumption, M, w ⊩ □p → ♦p, hence M, w ⊩ ♦p. By definition
of M, w ⊩ ♦p, there is some w′ with Rww′ such that M, w′ ⊩ p. Since also
M, w ⊩ □¬p, M, w′ ⊩ ¬p, a contradiction.
To show that a formula ψ does not entail another ϕ, we have to give a
counterexample, i.e., a model M = ⟨W, R, V⟩ where we show that at some
world w ∈ W, M, w ⊩ ψ but M, w ⊮ ϕ. Let's show that p → ♦p ⊭ □p → p.
Consider the model in ??. We have M, w1 ⊩ ♦p and hence M, w1 ⊩ p → ♦p.
However, since M, w1 ⊩ □p but M, w1 ⊮ p, we have M, w1 ⊮ □p → p.
Often very simple counterexamples suffice. The model M′ = ⟨W′, R′, V′⟩
with W′ = {w}, R′ = ∅, and V′(p) = ∅ is also a counterexample: Since
M′, w ⊮ p, M′, w ⊩ p → ♦p. As no worlds are accessible from w, we have
M′, w ⊩ □p, and so M′, w ⊮ □p → p.

Problems
Problem 38.1. Consider the model of ??. Which of the following hold?

1. M, w1 ⊩ q;

2. M, w3 ⊩ ¬q;

3. M, w1 ⊩ p ∨ q;

4. M, w1 ⊩ □(p ∨ q);

5. M, w3 ⊩ □q;

6. M, w3 ⊩ □⊥;

7. M, w1 ⊩ ♦q;

8. M, w1 ⊩ □q;

9. M, w1 ⊩ ¬□¬q.
Problem 38.2. Complete the proof of ??.

Problem 38.3. Let M = ⟨W, R, V⟩ be a model, and suppose w1, w2 ∈ W are
such that:

1. for all p: w1 ∈ V(p) if and only if w2 ∈ V(p); and

2. for all w ∈ W: Rw1w if and only if Rw2w.

Using induction on formulas, show that for all formulas ϕ: M, w1 ⊩ ϕ if and
only if M, w2 ⊩ ϕ.

Problem 38.4. Let M = ⟨W, R, V⟩. Show that M, w ⊩ ¬♦ϕ if and only if
M, w ⊩ □¬ϕ.

Problem 38.5. Consider the following model M for the language comprising
p1 , p2 , p3 as the only propositional variables:

[Model diagram: w1 with p1, ¬p2, ¬p3; w2 with p1, p2, ¬p3; w3 with p1, p2, p3; the accessibility arrows are as in the original figure.]

Are the following formulas and schemas true in the model M, i.e., true at
every world in M? Explain.

1. p → ♦p (for p atomic);

2. ϕ → ♦ϕ (for ϕ arbitrary);

3. p → p (for p atomic);

4. ¬ p → ♦p (for p atomic);

5. ♦ϕ (for ϕ arbitrary);

6. ♦p (for p atomic).

Problem 38.6. Show that the following are valid:


1.  p → (q → p);

2.  ¬⊥;

3.  p → (q → p).

Problem 38.7. Show that ϕ → □ϕ is valid in the class C of models M =
⟨W, R, V⟩ where W = {w}. Similarly, show that ψ → □ϕ and ♦ϕ → ψ are
valid in the class of models M = ⟨W, R, V⟩ where R = ∅.

Problem 38.8. Prove ??.

Problem 38.9. Prove the claim in the “only if” part of the proof of ??. (Hint:
use induction on ϕ.)

Problem 38.10. Show that none of the following formulas are valid:

D: □p → ♦p;

T: □p → p;

B: p → □♦p;

4: □p → □□p;

5: ♦p → □♦p.

Problem 38.11. Prove that the schemas in the first column of ?? are valid and
those in the second column are not valid.

Problem 38.12. Decide whether the following schemas are valid or invalid:

1. (♦ϕ → □ψ) → □(ϕ → ψ);

2. ♦( ϕ → ψ) ∨ (ψ → ϕ).

Problem 38.13. For each of the following schemas find a model M such that
every instance of the formula is true in M:

1. p → ♦♦p;

2. ♦p → p.

Problem 38.14. Show that □(ϕ ∧ ψ) ⊨ □ϕ.

Problem 38.15. Show that □(p → q) ⊭ p → □q and p → □q ⊭ □(p → q).



Chapter 39

Frame Definability

39.1 Introduction
One question that interests modal logicians is the relationship between the
accessibility relation and the truth of certain formulas in models with that ac-
cessibility relation. For instance, suppose the accessibility relation is reflexive,
i.e., for every w ∈ W, Rww. In other words, every world is accessible from
itself. That means that when □ϕ is true at a world w, w itself is among the
accessible worlds at which ϕ must therefore be true. So, if the accessibility
relation R of M is reflexive, then whatever world w and formula ϕ we take,
□ϕ → ϕ will be true there (in other words, the schema □p → p and all its
substitution instances are true in M).
The converse, however, is false. It's not the case, e.g., that if □p → p is
true in M, then R is reflexive. For we can easily find a non-reflexive model M
where □p → p is true at all worlds: take the model with a single world w,
not accessible from itself, but with w ∈ V(p). By picking the truth value of p
suitably, we can make □ϕ → ϕ true in a model that is not reflexive.
The solution is to remove the variable assignment V from the equation. If
we require that □p → p is true at all worlds in M, regardless of which worlds
are in V(p), then it is necessary that R is reflexive. For in any non-reflexive
model, there will be at least one world w such that not Rww. If we set V(p) =
W \ {w}, then p will be true at all worlds other than w, and so at all worlds
accessible from w (since w is guaranteed not to be accessible from w, and w is
the only world where p is false). On the other hand, p is false at w, so □p → p
is false at w.
This suggests that we should introduce a notation for model structures
without a valuation: we call these frames. A frame F is simply a pair hW, Ri
consisting of a set of worlds with an accessibility relation. Every model hW, R, V i
is then, as we say, based on the frame hW, Ri. Conversely, a frame determines
the class of models based on it; and a class of frames determines the class of
models which are based on any frame in the class. And we can define F ⊨ ϕ,
the notion of a formula being valid in a frame, as: M ⊩ ϕ for all M based on F.
With this notation, we can establish correspondence relations between formulas
and classes of frames: e.g., F ⊨ □p → p if, and only if, F is reflexive.

If R is . . .                                then . . . is true in M:
serial: ∀u∃v Ruv                            □p → ♦p   (D)
reflexive: ∀w Rww                           □p → p    (T)
symmetric: ∀u∀v (Ruv → Rvu)                 p → □♦p   (B)
transitive: ∀u∀v∀w ((Ruv ∧ Rvw) → Ruw)      □p → □□p  (4)
euclidean: ∀w∀u∀v ((Rwu ∧ Rwv) → Ruv)       ♦p → □♦p  (5)

Table 39.1: Five correspondence facts.

39.2 Properties of Accessibility Relations


Many modal formulas turn out to be characteristic of simple, and even famil-
iar, properties of the accessibility relation. In one direction, that means that
any model that has a given property makes a corresponding formula (and all
its substitution instances) true. We begin with five classical examples of kinds
of accessibility relations and the formulas the truth of which they guarantee.

Theorem 39.1. Let M = hW, R, V i be a model. If R has the property on the left side
of ??, every instance of the formula on the right side is true in M.

Proof. Here is the case for B: to show that the schema is true in a model we
need to show that all of its instances are true at all worlds in the model. So
let ϕ → □♦ϕ be a given instance of B, and let w ∈ W be an arbitrary world.
Suppose the antecedent ϕ is true at w; we want to show that □♦ϕ is true at
w. So we need to show that ♦ϕ is true at all w′ accessible from w. Now, for
any w′ such that Rww′ we have, using the hypothesis of symmetry, that also
Rw′w (see ??). Since M, w ⊩ ϕ, we have M, w′ ⊩ ♦ϕ. Since w′ was an arbitrary
world such that Rww′, we have M, w ⊩ □♦ϕ.
We leave the other cases as exercises.

Notice that the converse implications of ?? do not hold: it's not true that
if a model verifies a schema, then the accessibility relation of that model has
the corresponding property. In the case of T and reflexive models, it is easy to
give an example of a model in which T itself fails: let W = {w}, R = ∅, and V(p) = ∅.
Then R is not reflexive, but M, w ⊩ □p and M, w ⊮ p. But here we have just
a single instance of T that fails in M; other instances, e.g., □¬p → ¬p, are true.


[Figure 39.1: The argument from symmetry: ϕ holds at w, so by Rw′w every w′ with Rww′ satisfies ♦ϕ, and hence □♦ϕ holds at w.]

It is harder to give examples where every substitution instance of T is true in M


and M is not reflexive. But there are such models, too:

Proposition 39.2. Let M = hW, R, V i be a model such that W = {u, v}, where
worlds u and v are related by R: i.e., both Ruv and Rvu. Suppose that for all p:
u ∈ V ( p) ⇔ v ∈ V ( p). Then:

1. For all ϕ: M, u ⊩ ϕ if and only if M, v ⊩ ϕ (use induction on ϕ).

2. Every instance of T is true in M.

Since M is not reflexive (it is, in fact, irreflexive), the converse of ?? fails in the case
of T (similar arguments can be given for some—though not all—of the other schemas
mentioned in ??).

Although we will focus on the five classical formulas D, T, B, 4, and 5, we


record in ?? a few more properties of accessibility relations. The accessibility
relation R is partially functional, if from every world at most one world is ac-
cessible. If it is the case that from every world exactly one world is accessible,
we call it functional. (Thus the functional relations are precisely those that are
both serial and partially functional). They are called “functional” because the
accessibility relation operates like a (partial) function. A relation is weakly
dense if whenever Ruv, there is a w “between” u and v. So weakly dense rela-
tions are in a sense the opposite of transitive relations: in a transitive relation,
whenever you can reach v from u by a detour via w, you can reach v from u
directly; in a weakly dense relation, whenever you can reach v from u directly,
you can also reach it by a detour via some w. A relation is weakly directed if
whenever you can reach worlds u and v from some world w, you can reach
a single world t from both u and v—this is sometimes called the “diamond
property” or “confluence.”

39.3 Frames
Definition 39.3. A frame is a pair F = hW, Ri where W is a non-empty set of
worlds and R a binary relation on W. A model M is based on a frame F =
hW, Ri if and only if M = hW, R, V i.


If R is . . .                                              then . . . is true in M:
partially functional: ∀w∀u∀v ((Rwu ∧ Rwv) → u = v)         ♦p → □p
functional: ∀w∃v∀u (Rwu ↔ u = v)                           ♦p ↔ □p
weakly dense: ∀u∀v (Ruv → ∃w (Ruw ∧ Rwv))                  □□p → □p
weakly connected: ∀w∀u∀v ((Rwu ∧ Rwv) →
  (Ruv ∨ u = v ∨ Rvu))                                     □((p ∧ □p) → q) ∨ □((q ∧ □q) → p)  (L)
weakly directed: ∀w∀u∀v ((Rwu ∧ Rwv) → ∃t (Rut ∧ Rvt))     ♦□p → □♦p  (G)

Table 39.2: Five more correspondence facts.

Definition 39.4. If F is a frame, we say that ϕ is valid in F, F ⊨ ϕ, if M ⊩ ϕ for
every model M based on F.
If ℱ is a class of frames, we say ϕ is valid in ℱ, ℱ ⊨ ϕ, iff F ⊨ ϕ for every
frame F ∈ ℱ.

The reason frames are interesting is that correspondence between schemas


and properties of the accessibility relation R is at the level of frames, not of
models. For instance, although T is true in all reflexive models, not every model
in which T is true is reflexive. However, it is true that not only is T valid on all
reflexive frames, but also every frame in which T is valid is reflexive.
Remark 6. Validity in a class of frames is a special case of the notion of validity
in a class of models: ℱ ⊨ ϕ iff C ⊨ ϕ where C is the class of all models based
on a frame in ℱ.
Obviously, if a formula or a schema is valid, i.e., valid with respect to the
class of all models, it is also valid with respect to any class ℱ of frames.

39.4 Frame Definability


Even though the converse implications of ?? fail, they hold if we replace “model”
by “frame”: for the properties considered in ??, it is true that if a formula is
valid in a frame then the accessibility relation of that frame has the correspond-
ing property. So, the formulas considered define the classes of frames that have
the corresponding property.

Definition 39.5. If C is a class of frames, we say ϕ defines C iff F ⊨ ϕ for all and
only frames F ∈ C.

We now proceed to establish the full definability results for frames.


Theorem 39.6. If the formula on the right side of ?? is valid in a frame F, then F has
the property on the left side.

Proof. 1. Suppose D is valid in F = ⟨W, R⟩, i.e., F ⊨ □p → ♦p. Let M =
⟨W, R, V⟩ be a model based on F, and w ∈ W. We have to show that there
is a v such that Rwv. Suppose not: then both M, w ⊩ □ϕ and M, w ⊮ ♦ϕ
for any ϕ, including p. But then M, w ⊮ □p → ♦p, contradicting the
assumption that F ⊨ □p → ♦p.

2. Suppose T is valid in F, i.e., F ⊨ □p → p. Let w ∈ W be an arbitrary
world; we need to show Rww. Let u ∈ V(p) if and only if Rwu (when q
is other than p, V(q) is arbitrary, say V(q) = ∅). Let M = ⟨W, R, V⟩. By
construction, for all u such that Rwu: M, u ⊩ p, and hence M, w ⊩ □p.
But by hypothesis □p → p is true at w, so that M, w ⊩ p, but by definition
of V this is possible only if Rww.

3. We prove the contrapositive: Suppose F is not symmetric; we show that
B, i.e., p → □♦p, is not valid in F = ⟨W, R⟩. If F is not symmetric, there
are u, v ∈ W such that Ruv but not Rvu. Define V such that w ∈ V(p) if
and only if not Rvw (and V is arbitrary otherwise). Let M = ⟨W, R, V⟩.
Now, by definition of V, M, w ⊩ p for all w such that not Rvw, in particular,
M, u ⊩ p since not Rvu. Also, since w ∈ V(p) iff not Rvw, there is
no w such that Rvw and M, w ⊩ p, and hence M, v ⊮ ♦p. Since Ruv, also
M, u ⊮ □♦p. It follows that M, u ⊮ p → □♦p, and so B is not valid in F.

4. Suppose 4 is valid in F = ⟨W, R⟩, i.e., F ⊨ □p → □□p, and let u, v,
w ∈ W be arbitrary worlds such that Ruv and Rvw; we need to show
that Ruw. Define V such that z ∈ V(p) if and only if Ruz (and V is
arbitrary otherwise). Let M = ⟨W, R, V⟩. By definition of V, M, z ⊩ p
for all z such that Ruz, and hence M, u ⊩ □p. But by hypothesis 4,
□p → □□p, is true at u, so that M, u ⊩ □□p. Since Ruv and Rvw, we
have M, w ⊩ p, but by definition of V this is possible only if Ruw, as
desired.

5. We proceed contrapositively, assuming that the frame F = ⟨W, R⟩ is not
euclidean, and show that it falsifies 5, i.e., F ⊭ ♦p → □♦p. Suppose
there are worlds u, v, w ∈ W such that Rwu and Rwv but not Ruv.
Define V such that for all worlds z, z ∈ V(p) if and only if it is not the
case that Ruz. Let M = ⟨W, R, V⟩. Then by hypothesis M, v ⊩ p and
since Rwv also M, w ⊩ ♦p. However, there is no world y such that Ruy
and M, y ⊩ p, so M, u ⊮ ♦p. Since Rwu, it follows that M, w ⊮ □♦p, so
that 5, ♦p → □♦p, fails at w.


You'll notice a difference between the proof for D and the other cases: no
mention was made of the valuation V. In effect, we proved that if M ⊩ D then
M is serial. So D defines the class of serial models, not just frames.

Corollary 39.7. Any model where D is true is serial.

Corollary 39.8. Each formula on the right side of ?? defines the class of frames which
have the property on the left side.

Proof. In ??, we proved that if a model has the property on the left, the formula
on the right is true in it. Thus, if a frame F has the property on the left, the
formula on the right is valid in F. In ??, we proved the converse implications: if
a formula on the right is valid in F, F has the property on the left.

?? also shows that the properties can be combined: for instance if both
B and 4 are valid in F then the frame is both symmetric and transitive, etc.
Many important modal logics are characterized as the set of formulas valid
in all frames that combine some frame properties, and so we can characterize
them as the set of formulas valid in all frames in which the corresponding
defining formulas are valid. For instance, the classical system S4 is the set of
all formulas valid in all reflexive and transitive frames, i.e., in all those where
both T and 4 are valid. S5 is the set of all formulas valid in all reflexive,
symmetric, and euclidean frames, i.e., all those where all of T, B, and 5 are
valid.
Logical relationships between properties of R in general correspond to re-
lationships between the corresponding defining formulas. For instance, every
reflexive relation is serial; hence, whenever T is valid in a frame, so is D. (Note
that this relationship is not that of entailment. It is not the case that whenever
M, w ⊩ T then M, w ⊩ D.) We record some such relationships.

Proposition 39.9. Let R be a binary relation on a set W; then:

1. If R is reflexive, then it is serial.

2. If R is symmetric, then it is transitive if and only if it is euclidean.

3. If R is symmetric or euclidean then it is weakly directed (it has the "diamond property").

4. If R is euclidean then it is weakly connected.

5. If R is functional then it is serial.
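
Since the proof is left as an exercise (??), claims like these can also be sanity-checked exhaustively on small domains. The following Python sketch (ours, not part of the formal development) tests claim 2 on all 512 relations over a three-element set:

from itertools import product

def symmetric(R):
    return all((v, u) in R for (u, v) in R)

def transitive(R):
    # Ruv and Rvw imply Ruw
    return all((u, w) in R for (u, v) in R for (x, w) in R if v == x)

def euclidean(R):
    # Ruv and Ruw imply Rvw
    return all((v, w) in R for (u, v) in R for (x, w) in R if u == x)

pairs = list(product(range(3), repeat=2))
for bits in range(2 ** len(pairs)):
    R = {p for i, p in enumerate(pairs) if bits & (1 << i)}
    if symmetric(R):
        assert transitive(R) == euclidean(R)
print("claim 2 checked on all relations over a 3-element set")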


39.5 First-order Definability


We’ve seen that a number of properties of accessibility relations of frames
can be defined by modal formulas. For instance, symmetry of frames can
be defined by the formula B, p → □♦p. The conditions we've encountered
so far can all be expressed by first-order formulas in a language involving
a single two-place predicate symbol. For instance, symmetry is defined
by ∀x∀y(Q(x, y) → Q(y, x)) in the sense that a first-order structure M with
|M| = W and Q^M = R satisfies the preceding formula iff R is symmetric. This
suggests the following definition:

Definition 39.10. A class C of frames is first-order definable if there is a sentence ϕ in the first-order language with a single two-place predicate symbol Q such that F = ⟨W, R⟩ ∈ C iff M ⊨ ϕ in the first-order structure M with |M| = W and Q^M = R.

It turns out that the properties, and the modal formulas that define them, considered so far are exceptional. Not every modal formula defines a first-order definable class of frames, and not every first-order definable class of frames is definable by a modal formula.
A counterexample to the first is given by the Löb formula:

□(□p → p) → □p. (W)

W defines the class of transitive and converse well-founded frames. A relation
is well-founded if there is no infinite sequence w1, w2, . . . such that Rw2w1,
Rw3w2, . . . . For instance, the relation < on N is well-founded, whereas the
relation < on Z is not. A relation is converse well-founded iff its converse is
well-founded. So converse well-founded relations are those where there is no
infinite sequence w1, w2, . . . such that Rw1w2, Rw2w3, . . . .
There is, however, no first-order formula defining transitive converse well-founded relations. For suppose M ⊨ β iff R = Q^M is transitive and converse
well-founded. Let ϕn be the formula

(Q(a1, a2) ∧ · · · ∧ Q(an−1, an))

Now consider the set of formulas

Γ = {β, ϕ1, ϕ2, . . . }.

Every finite subset of Γ is satisfiable: Let k be the largest index such that ϕk is in the
subset, and let Mk be the structure with |Mk| = {1, . . . , k}, ai^{Mk} = i, and Q^{Mk} = <. Since < on {1, . . . , k} is
transitive and converse well-founded, Mk ⊨ β. Mk ⊨ ϕi by construction, for
all i ≤ k. By the Compactness Theorem for first-order logic, Γ is satisfiable in
some structure M. By hypothesis, since M ⊨ β, the relation Q^M is converse
well-founded. But clearly, a1^M, a2^M, . . . would form an infinite sequence of the
kind ruled out by converse well-foundedness.


A counterexample to the second claim is given by the property of universality: for every u and v, Ruv. Universal frames are first-order definable by
the formula ∀x∀y Q(x, y). However, no modal formula is valid in all and only
the universal frames. This is a consequence of a result that is independently
the formula ∀ x ∀y Q( x, y). However, no modal formula is valid in all and only
the universal frames. This is a consequence of a result that is independently
interesting: the formulas valid in universal frames are exactly the same as
those valid in reflexive, symmetric, and transitive frames. There are reflexive,
symmetric, and transitive frames that are not universal, hence every formula
valid in all universal frames is also valid in some non-universal frames.

39.6 Equivalence Relations and S5


The modal logic S5 is characterized as the set of formulas valid on all univer-
sal frames, i.e., frames in which every world is accessible from every world, including itself. In
such a scenario, □ corresponds to necessity and ♦ to possibility: □ϕ is true
if ϕ is true at every world, and ♦ϕ is true if ϕ is true at some world. It turns
out that S5 can also be characterized as the formulas valid on all reflexive,
symmetric, and transitive frames, i.e., on all equivalence relations.
Definition 39.11. A binary relation R on W is an equivalence relation if and only
if it is reflexive, symmetric and transitive. A relation R on W is universal if and
only if Ruv for all u, v ∈ W.
Since T, B, and 4 characterize the reflexive, symmetric, and transitive frames,
the frames where the accessibility relation is an equivalence relation are ex-
actly those in which all four formulas are valid. It turns out that the equiv-
alence relations can also be characterized by other combinations of formu-
las, since the conditions with which we’ve defined equivalence relations are
equivalent to combinations of other familiar conditions on R.
Proposition 39.12. The following are equivalent:
1. R is an equivalence relation;
2. R is reflexive and euclidean;
3. R is serial, symmetric, and euclidean;
4. R is serial, symmetric, and transitive.

Proof. Exercise.

?? is the semantic counterpart to ??, in that it gives an equivalent characterization of the modal logic of frames over which R is an equivalence relation (the logic
traditionally referred to as S5).
What is the relationship between universal and equivalence relations? Al-
though every universal relation is an equivalence relation, clearly not every
equivalence relation is universal. However, the formulas valid on all univer-
sal relations are exactly the same as those valid on all equivalence relations.


[Figure: the set W divided into four regions, the equivalence classes [w], [z], [u], and [v]; the class [w] is shaded.]

Figure 39.2: A partition of W in equivalence classes.

Proposition 39.13. Let R be an equivalence relation, and for each w ∈ W define the
equivalence class of w as the set [w] = {w′ ∈ W : Rww′}. Then:

1. w ∈ [w];

2. R is universal on each equivalence class [w];

3. The collection of equivalence classes partitions W into mutually exclusive and jointly exhaustive subsets.
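
As a concrete illustration, the following Python fragment (an informal sketch of ours; the particular relation is an arbitrary example) computes the equivalence classes of a finite equivalence relation and checks all three claims:

W = {0, 1, 2, 3}
# an equivalence relation on W with blocks {0, 1} and {2, 3}
R = {(0, 0), (1, 1), (0, 1), (1, 0), (2, 2), (3, 3), (2, 3), (3, 2)}

def eq_class(w):
    return frozenset(v for v in W if (w, v) in R)

classes = {eq_class(w) for w in W}
assert all(w in eq_class(w) for w in W)                          # claim 1
assert all((u, v) in R for c in classes for u in c for v in c)   # claim 2
assert set().union(*classes) == W                                # exhaustive
assert all(c == d or not (c & d)
           for c in classes for d in classes)                    # exclusive
print(sorted(sorted(c) for c in classes))                        # [[0, 1], [2, 3]]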

Proposition 39.14. A formula ϕ is valid in all frames F = ⟨W, R⟩ where R is an
equivalence relation, if and only if it is valid in all frames F = ⟨W, R⟩ where R is
universal. Hence, the logic of universal frames is just S5.

Proof. It's immediate to verify that a universal relation R on W is an equivalence. Hence, if ϕ is valid in all frames where R is an equivalence, it is valid in
all universal frames. For the other direction, we argue contrapositively: suppose ψ is a formula that fails at a world w in a model M = ⟨W, R, V⟩ based
on a frame ⟨W, R⟩, where R is an equivalence on W. So M, w ⊮ ψ. Define a
model M′ = ⟨W′, R′, V′⟩ as follows:

1. W′ = [w];

2. R′ is universal on W′;

3. V′(p) = V(p) ∩ W′.

(So the set W′ of worlds in M′ is represented by the shaded area in ??.) It is
easy to see that R and R′ agree on W′. Then one can show by induction on
formulas that for all w′ ∈ W′: M′, w′ ⊩ ϕ if and only if M, w′ ⊩ ϕ for each
ϕ (this makes sense since W′ ⊆ W). In particular, M′, w ⊮ ψ, and ψ fails in a
model based on a universal frame.


39.7 Second-order Definability


Not every frame property definable by modal formulas is first-order defin-
able. However, if we allow quantification over one-place predicates (i.e., monadic
second-order quantification), we can define all modally definable frame proper-
ties. The trick is to exploit a systematic way in which the conditions under
which a modal formula is true at a world are related to first-order formulas.
This is the so-called standard translation of modal formulas into first-order
formulas in a language containing not just a two-place predicate symbol Q
for the accessibility relation, but also a one-place predicate symbol Pi for the
propositional variables pi occurring in ϕ.

Definition 39.15. The standard translation ST_x(ϕ) is inductively defined as follows:

1. ϕ ≡ ⊥: ST_x(ϕ) = ⊥.

2. ϕ ≡ pi: ST_x(ϕ) = Pi(x).

3. ϕ ≡ ¬ψ: ST_x(ϕ) = ¬ST_x(ψ).

4. ϕ ≡ (ψ ∧ χ): ST_x(ϕ) = (ST_x(ψ) ∧ ST_x(χ)).

5. ϕ ≡ (ψ ∨ χ): ST_x(ϕ) = (ST_x(ψ) ∨ ST_x(χ)).

6. ϕ ≡ (ψ → χ): ST_x(ϕ) = (ST_x(ψ) → ST_x(χ)).

7. ϕ ≡ □ψ: ST_x(ϕ) = ∀y(Q(x, y) → ST_y(ψ)).

8. ϕ ≡ ♦ψ: ST_x(ϕ) = ∃y(Q(x, y) ∧ ST_y(ψ)).

For instance, ST_x(□p → p) is ∀y(Q(x, y) → P(y)) → P(x). Any structure
for the language of ST_x(ϕ) requires a domain, a two-place relation assigned
to Q, and subsets of the domain assigned to the one-place predicate symbols Pi. In other words, the components of such a structure are exactly those of
a model for ϕ: the domain is the set of worlds, the two-place relation assigned
to Q is the accessibility relation, and the subsets assigned to the Pi are just the assignments V(pi). It won't come as a surprise that satisfaction of ϕ in a modal model and
of ST_x(ϕ) in the corresponding structure agree:

Proposition 39.16. Let M = ⟨W, R, V⟩, let M′ be the first-order structure with |M′| = W, Q^{M′} = R, and Pi^{M′} = V(pi), and let s(x) = w. Then

M, w ⊩ ϕ iff M′, s ⊨ ST_x(ϕ)

Proof. By induction on ϕ.
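
The standard translation is completely mechanical, and it may help to see it as a program. The following Python sketch is our transcription of Definition 39.15, with modal formulas as nested tuples and first-order formulas built as strings; as in the definition, the clauses for □ and ♦ introduce a fresh bound variable:

from itertools import count

def st(phi, x="x", fresh=None):
    # Standard translation ST_x(phi); phi is a nested tuple.
    fresh = count() if fresh is None else fresh
    op = phi[0]
    if op == "bot":
        return "⊥"
    if op == "var":                      # p_i becomes P_i(x)
        return f"P{phi[1]}({x})"
    if op == "not":
        return f"¬{st(phi[1], x, fresh)}"
    if op in ("and", "or", "imp"):
        con = {"and": "∧", "or": "∨", "imp": "→"}[op]
        return f"({st(phi[1], x, fresh)} {con} {st(phi[2], x, fresh)})"
    if op == "box":
        y = f"y{next(fresh)}"            # fresh bound variable
        return f"∀{y}(Q({x},{y}) → {st(phi[1], y, fresh)})"
    if op == "dia":
        y = f"y{next(fresh)}"
        return f"∃{y}(Q({x},{y}) ∧ {st(phi[1], y, fresh)})"
    raise ValueError(op)

print(st(("imp", ("box", ("var", 1)), ("var", 1))))
# (∀y0(Q(x,y0) → P1(y0)) → P1(x)), i.e., ST_x(□p1 → p1)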


Proposition 39.17. Suppose ϕ is a modal formula and F = ⟨W, R⟩ is a frame. Let
F′ be the first-order structure with |F′| = W and Q^{F′} = R, and let ϕ′ be the second-order formula

∀X1 . . . ∀Xn ∀x ST_x(ϕ)[X1/P1, . . . , Xn/Pn],

where P1, . . . , Pn are all the one-place predicate symbols in ST_x(ϕ). Then

F ⊨ ϕ iff F′ ⊨ ϕ′

Proof. F′ ⊨ ϕ′ iff for every structure M′ where Pi^{M′} ⊆ W for i = 1, . . . , n, and
for every s with s(x) ∈ W, M′, s ⊨ ST_x(ϕ). By ??, that is the case iff for all
models M based on F and every world w ∈ W, M, w ⊩ ϕ, i.e., F ⊨ ϕ.

Definition 39.18. A class C of frames is second-order definable if there is a sentence ϕ in the second-order language with a single two-place predicate symbol Q and quantifiers only over monadic set variables, such that F = ⟨W, R⟩ ∈
C iff M ⊨ ϕ in the structure M with |M| = W and Q^M = R.

Corollary 39.19. If a class of frames is definable by a formula ϕ, the corresponding
class of accessibility relations is definable by a monadic second-order sentence.

Proof. The monadic second-order sentence ϕ′ of the preceding proof has the
required property.

As an example, consider again the formula □p → p. It defines reflexivity.
Reflexivity is of course first-order definable by the sentence ∀x Q(x, x). But it is
also definable by the monadic second-order sentence

∀X ∀x(∀y(Q(x, y) → X(y)) → X(x)).

This means, of course, that the two sentences are equivalent. Here's how you
might convince yourself of this directly: First suppose the second-order sentence is true in a structure M. Since x and X are universally quantified, the
remainder must hold for any x ∈ W and set X ⊆ W, e.g., the set {z : Rxz}
where R = Q^M. So, for any s with s(x) ∈ W and s(X) = {z : Rxz} we have
M, s ⊨ ∀y(Q(x, y) → X(y)) → X(x). But by the way we've picked s(X) that
means M, s ⊨ ∀y(Q(x, y) → Q(x, y)) → Q(x, x), which is equivalent to Q(x, x)
since the antecedent is valid. Since s(x) is arbitrary, we have M ⊨ ∀x Q(x, x).
Now suppose that M ⊨ ∀x Q(x, x) and show that M ⊨ ∀X ∀x(∀y(Q(x, y) →
X(y)) → X(x)). Pick any assignment s, and assume M, s ⊨ ∀y(Q(x, y) →
X(y)). Let s′ be the y-variant of s with s′(y) = x; we have M, s′ ⊨ Q(x, y) →
X(y), i.e., M, s ⊨ Q(x, x) → X(x). Since M ⊨ ∀x Q(x, x), the antecedent is
true, and we have M, s ⊨ X(x), which is what we needed to show.
Since some definable classes of frames are not first-order definable, not
every monadic second-order sentence of the form ϕ′ is equivalent to a first-order sentence. There is no effective method to decide which ones are.


Problems
Problem 39.1. Complete the proof of ??

Problem 39.2. Prove the claims in ??.

Problem 39.3. Let M = ⟨W, R, V⟩ be a model. Show that if R satisfies the left-hand properties of ??, every instance of the corresponding right-hand formula
is true in M.

Problem 39.4. Show that if the formula on the right side of ?? is valid in a
frame F, then F has the property on the left side. To do this, consider a frame
that does not satisfy the property on the left, and define a suitable V such that
the formula on the right is false at some world.

Problem 39.5. Prove ??.

Problem 39.6. Prove ?? by showing:

1. If R is symmetric and transitive, it is euclidean.

2. If R is reflexive, it is serial.

3. If R is reflexive and euclidean, it is symmetric.

4. If R is symmetric and euclidean, it is transitive.

5. If R is serial, symmetric, and transitive, it is reflexive.

Explain why this suffices for the proof that the conditions are equivalent.



Chapter 40

Axiomatic Derivations

40.1 Introduction
We have a semantics for the basic modal language in terms of modal models,
and a notion of a formula being valid—true at all worlds in all models—or
valid with respect to some class of models or frames—true at all worlds in
all models in the class, or based on the frame. Logic usually connects such
semantic characterizations of validity with a proof-theoretic notion of deriv-
ability. The aim is to define a notion of derivability in some system such that
a formula is derivable iff it is valid.
The simplest and historically oldest derivation systems are so-called Hilbert-
type or axiomatic derivation systems. Hilbert-type derivation systems for
many modal logics are relatively easy to construct: they are simple as ob-
jects of metatheoretical study (e.g., to prove soundness and completeness).
However, they are much harder to use to prove formulas in than, say, natural
deduction systems.
In Hilbert-type derivation systems, a derivation of a formula is a sequence
of formulas leading from certain axioms, via a handful of inference rules, to
the formula in question. Since we want the derivation system to match the
semantics, we have to guarantee that the set of derivable formulas are true
in all models (or true in all models in which all axioms are true). We’ll first
isolate some properties of modal logics that are necessary for this to work:
the “normal” modal logics. For normal modal logics, there are only two in-
ference rules that need to be assumed: modus ponens and necessitation. As
axioms we take all (substitution instances of) tautologies, and, depending on
the modal logic we deal with, a number of modal axioms. Even if we are just
interested in the class of all models, we must also count all substitution in-
stances of K and Dual as axioms. This alone generates the minimal normal
modal logic K.

Definition 40.1. The rule of modus ponens is the inference schema

ϕ    ϕ → ψ
----------- MP
     ψ

We say a formula ψ follows from formulas ϕ, χ by modus ponens iff χ ≡ ϕ → ψ.

Definition 40.2. The rule of necessitation is the inference schema

ϕ
----- NEC
□ϕ

We say the formula ψ follows from the formula ϕ by necessitation iff ψ ≡ □ϕ.

Definition 40.3. A derivation from a set of axioms Σ is a sequence of formulas
ψ1, ψ2, . . . , ψn, where each ψi is either

1. a substitution instance of a tautology, or

2. a substitution instance of a formula in Σ, or

3. follows from two formulas ψj , ψk with j, k < i by modus ponens, or

4. follows from a formula ψj with j < i by necessitation.

If there is such a derivation with ψn ≡ ϕ, we say that ϕ is derivable from Σ, in
symbols Σ ⊢ ϕ.

With this definition, it will turn out that the set of derivable formulas forms
a normal modal logic, and that any derivable formula is true in every model
in which every axiom is true. This property of derivations is called soundness.
The converse, completeness, is harder to prove.
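
Derivations in this sense are mechanically checkable, given a way to recognize axioms. The following Python sketch is illustrative only: it trusts lines labeled "axiom" (tautological instances and instances of Σ) and verifies just the MP and NEC steps, with formulas represented as nested tuples:

def check(derivation):
    # derivation: list of (formula, justification) pairs; a justification
    # is "axiom", ("MP", i, j) or ("NEC", i), with i, j earlier line indices
    for n, (phi, just) in enumerate(derivation):
        if just == "axiom":
            continue
        if just[0] == "MP":
            i, j = just[1], just[2]
            if not (i < n and j < n and
                    derivation[j][0] == ("imp", derivation[i][0], phi)):
                return False
        elif just[0] == "NEC":
            i = just[1]
            if not (i < n and phi == ("box", derivation[i][0])):
                return False
        else:
            return False
    return True

p = ("var", "p")
d = [(("imp", p, p), "axiom"),              # a tautological instance
     (("box", ("imp", p, p)), ("NEC", 0))]  # □(p → p) by necessitation
print(check(d))  # True

Recognizing tautological instances is itself decidable (by truth tables), so a full checker is possible; we leave axiomhood as an oracle here to keep the sketch short.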

40.2 Normal Modal Logics


Not every set of modal formulas can easily be characterized as those formulas
derivable from a set of axioms. We want modal logics to be well-behaved.
First of all, everything we can derive in classical propositional logic should
still be derivable, of course taking into account that the formulas may now
contain also  and ♦. To this end, we require that a modal logic contain all
tautological instances and be closed under modus ponens.

Definition 40.4. A modal logic is a set Σ of modal formulas which

1. contains all tautologies, and

2. is closed under substitution, i.e., if ϕ ∈ Σ and θ1, . . . , θn are formulas, then

ϕ[θ1/p1, . . . , θn/pn] ∈ Σ, and

3. is closed under modus ponens, i.e., if ϕ ∈ Σ and ϕ → ψ ∈ Σ, then ψ ∈ Σ.


In order to use the relational semantics for modal logics, we also have to re-
quire that all formulas valid in all modal models are included. It turns out that
this requirement is met as soon as all instances of K and DUAL are derivable,
and whenever a formula ϕ is derivable, so is □ϕ. A modal logic that satisfies
these conditions is called normal. (Of course, there are also non-normal modal
logics, but the usual relational models are not adequate for them.)

Definition 40.5. A modal logic Σ is normal if it contains

□(p → q) → (□p → □q), (K)

♦p ↔ ¬□¬p (DUAL)

and is closed under necessitation, i.e., if ϕ ∈ Σ, then □ϕ ∈ Σ.

Observe that while tautological implication is "fine-grained" enough to
preserve truth at a world, the rule NEC only preserves truth in a model (and
hence also validity in a frame or in a class of frames).

Proposition 40.6. Every normal modal logic is closed under rule RK:

ϕ1 → (ϕ2 → · · · (ϕn−1 → ϕn) · · · )
------------------------------------ RK
□ϕ1 → (□ϕ2 → · · · (□ϕn−1 → □ϕn) · · · )

Proof. By induction on n: If n = 1, then the rule is just NEC, and every normal
modal logic is closed under NEC.
Now suppose the result holds for n − 1; we show it holds for n. Assume

ϕ1 → (ϕ2 → · · · (ϕn−1 → ϕn) · · · ) ∈ Σ

By the induction hypothesis (applied to the n − 1 formulas ϕ1, . . . , ϕn−2, ϕn−1 → ϕn), we have

□ϕ1 → (□ϕ2 → · · · □(ϕn−1 → ϕn) · · · ) ∈ Σ

Since Σ is a normal modal logic, it contains all instances of K, in particular

□(ϕn−1 → ϕn) → (□ϕn−1 → □ϕn) ∈ Σ

Using modus ponens and suitable tautological instances we get

□ϕ1 → (□ϕ2 → · · · (□ϕn−1 → □ϕn) · · · ) ∈ Σ.

Proposition 40.7. Every normal modal logic Σ contains ¬♦⊥.

Proposition 40.8. Let ϕ1, . . . , ϕn be formulas. Then there is a smallest modal logic
Σ containing all instances of ϕ1, . . . , ϕn.


Proof. Given ϕ1, . . . , ϕn, define Σ as the intersection of all normal modal logics containing all instances of ϕ1, . . . , ϕn. The intersection is non-empty, as
Frm(L), the set of all formulas, is such a modal logic.

Definition 40.9. The smallest normal modal logic containing ϕ1, . . . , ϕn is
called a modal system and denoted by Kϕ1 . . . ϕn. The smallest normal modal
logic is denoted by K.

40.3 Derivations and Modal Systems


We first define what a derivation is for normal modal logics. Roughly, a deriva-
tion is a sequence of formulas in which every element is either (a substitution
instance of) one of a number of axioms, or follows from previous elements by
one of a few inference rules. For normal modal logics, all instances of tau-
tologies, K, and DUAL count as axioms. This results in the modal system K,
the smallest normal modal logic. We may wish to add additional axioms to
obtain other systems, however. The rules are always modus ponens MP and
necessitation NEC.
Definition 40.10. Given a modal system Kϕ1 . . . ϕn and a formula ψ, we say
that ψ is derivable in Kϕ1 . . . ϕn, written Kϕ1 . . . ϕn ⊢ ψ, if and only if there
are formulas χ1, . . . , χk such that χk ≡ ψ and each χi is either a tautological instance, or an instance of one of K, DUAL, ϕ1, . . . , ϕn, or it follows from
previous formulas by means of the rules MP or NEC.
The following proposition allows us to show that ψ ∈ Σ by exhibiting a
Σ-proof of ψ.

Proposition 40.11. Kϕ1 . . . ϕn = {ψ : Kϕ1 . . . ϕn ⊢ ψ}.

Proof. We use induction on the length of derivations to show that {ψ : Kϕ1 . . . ϕn ⊢ ψ} ⊆ Kϕ1 . . . ϕn.
If the derivation of ψ has length 1, it contains a single formula. That formula cannot follow from previous formulas by MP or NEC, so it must be a tautological instance, an instance of K or DUAL, or an instance of one of ϕ1, . . . , ϕn.
But Kϕ1 . . . ϕn contains these as well, so ψ ∈ Kϕ1 . . . ϕn.
If the derivation of ψ has length > 1, then ψ may in addition be obtained
by MP or NEC from formulas occurring earlier in the derivation.
If ψ follows from χ and χ → ψ (by MP), then χ, χ → ψ ∈ Kϕ1 . . . ϕn by
induction hypothesis. But every modal logic is closed under modus ponens,
so ψ ∈ Kϕ1 . . . ϕn. If ψ ≡ □χ follows from χ by NEC, then χ ∈ Kϕ1 . . . ϕn by
induction hypothesis. But every normal modal logic is closed under NEC, so
ψ ∈ Kϕ1 . . . ϕn.
The converse inclusion follows by showing that Σ = {ψ : Kϕ1 . . . ϕn ⊢ ψ}
is a normal modal logic containing all the instances of ϕ1, . . . , ϕn, and the
observation that Kϕ1 . . . ϕn is, by definition, the smallest such logic.


1. Every tautology ψ is a tautological instance, so Kϕ1 . . . ϕn ⊢ ψ, so Σ contains all tautologies.

2. If Kϕ1 . . . ϕn ⊢ χ and Kϕ1 . . . ϕn ⊢ χ → ψ, then Kϕ1 . . . ϕn ⊢ ψ: Combine
the derivation of χ with that of χ → ψ, and add the line ψ. The last line
is justified by MP. So Σ is closed under modus ponens.

3. If ψ has a derivation, then every substitution instance of ψ also has a
derivation: apply the substitution to every formula in the derivation.
(Exercise: prove by induction on the length of derivations that the result is also a correct derivation.) So Σ is closed under uniform substitution. (We have now established that Σ satisfies all conditions of a modal
logic.)

4. We have Kϕ1 . . . ϕn ⊢ K, so K ∈ Σ.

5. We have Kϕ1 . . . ϕn ⊢ DUAL, so DUAL ∈ Σ.

6. If Kϕ1 . . . ϕn ⊢ χ, then adding the line □χ results in a correct derivation; the additional line is justified by NEC. Consequently, Σ is closed under NEC. Thus, Σ is normal.

40.4 Proofs in K
In order to practice proofs in the smallest modal system, we show that the valid
formulas on the left-hand side of ?? can all be given K-proofs.

Proposition 40.12. K ⊢ □ϕ → □(ψ → ϕ)

Proof.

1. ϕ → (ψ → ϕ)   TAUT
2. □(ϕ → (ψ → ϕ))   NEC, 1
3. □(ϕ → (ψ → ϕ)) → (□ϕ → □(ψ → ϕ))   K
4. □ϕ → □(ψ → ϕ)   MP, 2, 3

Proposition 40.13. K ⊢ □(ϕ ∧ ψ) → (□ϕ ∧ □ψ)

Proof.

1. (ϕ ∧ ψ) → ϕ   TAUT
2. □((ϕ ∧ ψ) → ϕ)   NEC, 1
3. □((ϕ ∧ ψ) → ϕ) → (□(ϕ ∧ ψ) → □ϕ)   K
4. □(ϕ ∧ ψ) → □ϕ   MP, 2, 3
5. (ϕ ∧ ψ) → ψ   TAUT
6. □((ϕ ∧ ψ) → ψ)   NEC, 5
7. □((ϕ ∧ ψ) → ψ) → (□(ϕ ∧ ψ) → □ψ)   K
8. □(ϕ ∧ ψ) → □ψ   MP, 6, 7
9. (□(ϕ ∧ ψ) → □ϕ) → ((□(ϕ ∧ ψ) → □ψ) → (□(ϕ ∧ ψ) → (□ϕ ∧ □ψ)))   TAUT
10. (□(ϕ ∧ ψ) → □ψ) → (□(ϕ ∧ ψ) → (□ϕ ∧ □ψ))   MP, 4, 9
11. □(ϕ ∧ ψ) → (□ϕ ∧ □ψ)   MP, 8, 10

Note that the formula on line 9 is an instance of the tautology

(p → q) → ((p → r) → (p → (q ∧ r))).

Proposition 40.14. K ⊢ (□ϕ ∧ □ψ) → □(ϕ ∧ ψ)

Proof.

1. ϕ → (ψ → (ϕ ∧ ψ))   TAUT
2. □(ϕ → (ψ → (ϕ ∧ ψ)))   NEC, 1
3. □(ϕ → (ψ → (ϕ ∧ ψ))) → (□ϕ → □(ψ → (ϕ ∧ ψ)))   K
4. □ϕ → □(ψ → (ϕ ∧ ψ))   MP, 2, 3
5. □(ψ → (ϕ ∧ ψ)) → (□ψ → □(ϕ ∧ ψ))   K
6. (□ϕ → □(ψ → (ϕ ∧ ψ))) → ((□(ψ → (ϕ ∧ ψ)) → (□ψ → □(ϕ ∧ ψ))) → (□ϕ → (□ψ → □(ϕ ∧ ψ))))   TAUT
7. (□(ψ → (ϕ ∧ ψ)) → (□ψ → □(ϕ ∧ ψ))) → (□ϕ → (□ψ → □(ϕ ∧ ψ)))   MP, 4, 6
8. □ϕ → (□ψ → □(ϕ ∧ ψ))   MP, 5, 7
9. (□ϕ → (□ψ → □(ϕ ∧ ψ))) → ((□ϕ ∧ □ψ) → □(ϕ ∧ ψ))   TAUT
10. (□ϕ ∧ □ψ) → □(ϕ ∧ ψ)   MP, 8, 9

The formulas on lines 6 and 9 are instances of the tautologies

(p → q) → ((q → r) → (p → r))
(p → (q → r)) → ((p ∧ q) → r)

Proposition 40.15. K ⊢ ¬□p → ♦¬p

Proof.

1. ♦¬p ↔ ¬□¬¬p   DUAL
2. (♦¬p ↔ ¬□¬¬p) → (¬□¬¬p → ♦¬p)   TAUT
3. ¬□¬¬p → ♦¬p   MP, 1, 2
4. ¬¬p → p   TAUT
5. □(¬¬p → p)   NEC, 4
6. □(¬¬p → p) → (□¬¬p → □p)   K
7. □¬¬p → □p   MP, 5, 6
8. (□¬¬p → □p) → (¬□p → ¬□¬¬p)   TAUT
9. ¬□p → ¬□¬¬p   MP, 7, 8
10. (¬□p → ¬□¬¬p) → ((¬□¬¬p → ♦¬p) → (¬□p → ♦¬p))   TAUT
11. (¬□¬¬p → ♦¬p) → (¬□p → ♦¬p)   MP, 9, 10
12. ¬□p → ♦¬p   MP, 3, 11

The formulas on lines 8 and 10 are instances of the tautologies

(p → q) → (¬q → ¬p)
(p → q) → ((q → r) → (p → r)).

40.5 Derived Rules


Finding and writing derivations is obviously difficult, cumbersome, and repet-
itive. For instance, very often we want to pass from ϕ → ψ to ϕ → ψ, i.e.,
apply rule RK. That requires an application of NEC, then recording the proper
instance of K, then applying MP. Passing from ϕ → ψ and ψ → χ to ϕ → χ
requires recording the (long) tautological instance
( ϕ → ψ) → ((ψ → χ) → ( ϕ → χ))
and applying MP twice. Often we want to replace a sub-formula by a formula
we know to be equivalent, e.g., ♦ϕ by ¬¬ ϕ, or ¬¬ ϕ by ϕ. So rather than
write out the actual derivation, it is more convenient to simply record why
the intermediate steps are derivable. For this purpose, let us collect some facts
about derivability.
Proposition 40.16. If K ⊢ ϕ1, . . . , K ⊢ ϕn, and ψ follows from ϕ1, . . . , ϕn by
propositional logic, then K ⊢ ψ.

Proof. If ψ follows from ϕ1, . . . , ϕn by propositional logic, then

ϕ1 → (ϕ2 → · · · (ϕn → ψ) . . . )

is a tautological instance. Applying MP n times gives a derivation of ψ.


We will indicate use of this proposition by PL.


Proposition 40.17. If K ⊢ ϕ1 → (ϕ2 → · · · (ϕn−1 → ϕn) . . . ) then K ⊢ □ϕ1 →
(□ϕ2 → · · · (□ϕn−1 → □ϕn) . . . ).
Proof. By induction on n, just as in the proof of ??.

We will indicate use of this proposition by RK. Let's illustrate how these
results help establish derivability results more easily.
Proposition 40.18. K ⊢ (□ϕ ∧ □ψ) → □(ϕ ∧ ψ)

Proof.

1. K ⊢ ϕ → (ψ → (ϕ ∧ ψ))   TAUT
2. K ⊢ □ϕ → (□ψ → □(ϕ ∧ ψ))   RK, 1
3. K ⊢ (□ϕ ∧ □ψ) → □(ϕ ∧ ψ)   PL, 2

Proposition 40.19. If K ⊢ ϕ ↔ ψ and K ⊢ χ[ϕ/p] then K ⊢ χ[ψ/p]

Proof. Exercise.

This proposition comes in handy especially when we want to convert ♦
into ¬□¬ (or vice versa), or remove double negations inside a formula. For instance:
Proposition 40.20. K ⊢ ¬□p → ♦¬p

Proof.

1. K ⊢ ♦¬p ↔ ¬□¬¬p   DUAL
2. K ⊢ ¬□¬¬p → ♦¬p   PL, 1
3. K ⊢ ¬□p → ♦¬p   re-writing p for ¬¬p

The following proposition justifies establishing derivability results schematically. E.g., the previous proposition does not just establish that
K ⊢ ¬□p → ♦¬p, but K ⊢ ¬□ϕ → ♦¬ϕ for arbitrary ϕ.

Proposition 40.21. If ϕ is a substitution instance of ψ and K ⊢ ψ, then K ⊢ ϕ.

Proof. It is tedious but routine to verify (by induction on the length of the
derivation of ψ) that applying a substitution to an entire derivation also re-
sults in a correct derivation. Specifically, substitution instances of tautological
instances are themselves tautological instances, substitution instances of in-
stances of DUAL and K are themselves instances of DUAL and K, and applica-
tions of MP and NEC remain correct when substituting formulas for proposi-
tional variables in both premise(s) and conclusion.


40.6 More Proofs in K


Let’s see some more examples of derivability in K, now using the simplified
method introduced in ??.

Proposition 40.22. K ⊢ □(ϕ → ψ) → (♦ϕ → ♦ψ)

Proof.

1. K ⊢ (ϕ → ψ) → (¬ψ → ¬ϕ)   PL
2. K ⊢ □(ϕ → ψ) → (□¬ψ → □¬ϕ)   RK, 1
3. K ⊢ (□¬ψ → □¬ϕ) → (¬□¬ϕ → ¬□¬ψ)   TAUT
4. K ⊢ □(ϕ → ψ) → (¬□¬ϕ → ¬□¬ψ)   PL, 2, 3
5. K ⊢ □(ϕ → ψ) → (♦ϕ → ♦ψ)   re-writing ♦ for ¬□¬.

Proposition 40.23. K ⊢ □ϕ → (♦(ϕ → ψ) → ♦ψ)

Proof.

1. K ⊢ ϕ → (¬ψ → ¬(ϕ → ψ))   TAUT
2. K ⊢ □ϕ → (□¬ψ → □¬(ϕ → ψ))   RK, 1
3. K ⊢ □ϕ → (¬□¬(ϕ → ψ) → ¬□¬ψ)   PL, 2
4. K ⊢ □ϕ → (♦(ϕ → ψ) → ♦ψ)   re-writing ♦ for ¬□¬.

Proposition 40.24. K ⊢ (♦ϕ ∨ ♦ψ) → ♦(ϕ ∨ ψ)

Proof.

1. K ⊢ ¬(ϕ ∨ ψ) → ¬ϕ   TAUT
2. K ⊢ □¬(ϕ ∨ ψ) → □¬ϕ   RK, 1
3. K ⊢ ¬□¬ϕ → ¬□¬(ϕ ∨ ψ)   PL, 2
4. K ⊢ ♦ϕ → ♦(ϕ ∨ ψ)   re-writing
5. K ⊢ ♦ψ → ♦(ϕ ∨ ψ)   similarly
6. K ⊢ (♦ϕ ∨ ♦ψ) → ♦(ϕ ∨ ψ)   PL, 4, 5.

Proposition 40.25. K ⊢ ♦(ϕ ∨ ψ) → (♦ϕ ∨ ♦ψ)

Proof.

1. K ⊢ ¬ϕ → (¬ψ → ¬(ϕ ∨ ψ))   TAUT
2. K ⊢ □¬ϕ → (□¬ψ → □¬(ϕ ∨ ψ))   RK, 1
3. K ⊢ □¬ϕ → (¬□¬(ϕ ∨ ψ) → ¬□¬ψ)   PL, 2
4. K ⊢ ¬□¬(ϕ ∨ ψ) → (□¬ϕ → ¬□¬ψ)   PL, 3
5. K ⊢ ¬□¬(ϕ ∨ ψ) → (¬¬□¬ψ → ¬□¬ϕ)   PL, 4
6. K ⊢ ♦(ϕ ∨ ψ) → (¬♦ψ → ♦ϕ)   re-writing ♦ for ¬□¬
7. K ⊢ ♦(ϕ ∨ ψ) → (♦ψ ∨ ♦ϕ)   PL, 6.

40.7 Dual Formulas


Definition 40.26. Each of the formulas T, B, 4, and 5 has a dual, denoted by a
subscripted diamond, as follows:

p → ♦p (T♦)
♦□p → p (B♦)
♦♦p → ♦p (4♦)
♦□p → □p (5♦)

Each of these dual formulas is obtained from the corresponding formula by substituting ¬p for p, contraposing, replacing ¬□¬ by ♦, and replacing ¬♦¬ by □. D, i.e., □p → ♦p, is its own dual in that sense.

40.8 Proofs in Modal Systems


We now come to proofs in systems of modal logic other than K.

Proposition 40.27. The following provability results obtain:

1. KT5 ⊢ B;

2. KT5 ⊢ 4;

3. KDB4 ⊢ T;

4. KB4 ⊢ 5;

5. KB5 ⊢ 4;

6. KT ⊢ D.

Proof. We exhibit proofs for each.

1. KT5 ⊢ B:

1. KT5 ⊢ ♦ϕ → □♦ϕ   5
2. KT5 ⊢ ϕ → ♦ϕ   T♦
3. KT5 ⊢ ϕ → □♦ϕ   PL, 1, 2.

2. KT5 ⊢ 4:

1. KT5 ⊢ ♦□ϕ → □♦□ϕ   5 with □ϕ for p
2. KT5 ⊢ □ϕ → ♦□ϕ   T♦ with □ϕ for p
3. KT5 ⊢ □ϕ → □♦□ϕ   PL, 1, 2
4. KT5 ⊢ ♦□ϕ → □ϕ   5♦
5. KT5 ⊢ □♦□ϕ → □□ϕ   RK, 4
6. KT5 ⊢ □ϕ → □□ϕ   PL, 3, 5.

3. KDB4 ⊢ T:

1. KDB4 ⊢ ♦□ϕ → ϕ   B♦
2. KDB4 ⊢ □□ϕ → ♦□ϕ   D with □ϕ for p
3. KDB4 ⊢ □□ϕ → ϕ   PL, 1, 2
4. KDB4 ⊢ □ϕ → □□ϕ   4
5. KDB4 ⊢ □ϕ → ϕ   PL, 3, 4.

4. KB4 ⊢ 5:

1. KB4 ⊢ ♦ϕ → □♦♦ϕ   B with ♦ϕ for p
2. KB4 ⊢ ♦♦ϕ → ♦ϕ   4♦
3. KB4 ⊢ □♦♦ϕ → □♦ϕ   RK, 2
4. KB4 ⊢ ♦ϕ → □♦ϕ   PL, 1, 3.

5. KB5 ⊢ 4:

1. KB5 ⊢ □ϕ → □♦□ϕ   B with □ϕ for p
2. KB5 ⊢ ♦□ϕ → □ϕ   5♦
3. KB5 ⊢ □♦□ϕ → □□ϕ   RK, 2
4. KB5 ⊢ □ϕ → □□ϕ   PL, 1, 3.

6. KT ⊢ D:

1. KT ⊢ □ϕ → ϕ   T
2. KT ⊢ ϕ → ♦ϕ   T♦
3. KT ⊢ □ϕ → ♦ϕ   PL, 1, 2

Definition 40.28. Following tradition, we define S4 to be the system KT4, and
S5 the system KTB4.


The following proposition shows that the classical system S5 has several
equivalent axiomatizations. This should not be surprising, as the various combinations of axioms all characterize equivalence relations (see ??).

Proposition 40.29. KTB4 = KT5 = KDB4 = KDB5.

Proof. Exercise.

40.9 Soundness
A derivation system is called sound if everything that can be derived is valid.
When considering modal systems, i.e., derivations where in addition to K we
can use instances of some formulas ϕ1 , . . . , ϕn , we want every derivable for-
mula to be true in any model in which ϕ1 , . . . , ϕn are true.

Theorem 40.30 (Soundness Theorem). If every instance of ϕ1, . . . , ϕn is valid in
the classes of models C1, . . . , Cn, respectively, then Kϕ1 . . . ϕn ⊢ ψ implies that ψ is
valid in the class of models C1 ∩ · · · ∩ Cn.

Proof. By induction on length of proofs. For brevity, put C = C1 ∩ · · · ∩ Cn.

1. Induction Basis: If ψ has a proof of length 1, then it is either a tautological
instance, an instance of K or of DUAL, or an instance of one of ϕ1, . . . , ϕn.
In the first case, ψ is valid in C, since tautological instances are valid in
any class of models, by ??. Similarly in the second case, by ?? and ??.
Finally in the third case, since ψ is valid in Ci and C ⊆ Ci, we have that
ψ is valid in C as well.

2. Inductive step: Suppose ψ has a proof of length k > 1. If ψ is a tautological instance or an instance of one of ϕ1, . . . , ϕn, we proceed as in the
previous step. So suppose ψ is obtained by MP from previous formulas
χ → ψ and χ. Then χ → ψ and χ have proofs of length < k, and by inductive hypothesis they are valid in C. By ??, ψ is valid in C as well. Finally
suppose ψ is obtained by NEC from χ (so that ψ ≡ □χ). By inductive
hypothesis, χ is valid in C, and by ?? so is ψ.

40.10 Showing Systems are Distinct


In ?? we saw how to prove that two systems of modal logic are in fact the same
system. ?? allows us to show that two modal systems Σ and Σ′ are distinct: find
a formula ϕ such that Σ′ ⊢ ϕ which fails in a model of Σ.

Proposition 40.31. KD ⊊ KT


Proof. This is the syntactic counterpart to the semantic fact that all reflexive
relations are serial. To show KD ⊆ KT we need to see that KD ⊢ ψ implies
KT ⊢ ψ, which follows from KT ⊢ D, as shown in ??. To show that the inclusion is proper, by Soundness (??), it suffices to exhibit a model of KD where
T, i.e., □p → p, fails (an easy task left as an exercise), for then by Soundness
KD ⊬ □p → p.

Proposition 40.32. KB ≠ K4.

Proof. We construct a symmetric model where some instance of 4 fails; since
the instance is obviously derivable in K4 but not in KB, it will follow that K4 ⊈
KB. Consider the symmetric model M of ??. Since the model is symmetric,
K and B are true in M (by ?? and ??, respectively). However, M, w1 ⊮ □p →
□□p.
[Figure: two worlds w1 (where ¬p holds) and w2 (where p holds), each accessible from the other; at w1, □p holds but □□p fails, and at w2, □p fails.]

Figure 40.1: A symmetric model falsifying an instance of 4.
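
The countermodel of ?? can also be verified mechanically. Here is a minimal Python sketch of ours (it implements only the connectives needed here):

def holds(W, R, V, w, phi):
    op = phi[0]
    if op == "var":
        return w in V[phi[1]]
    if op == "imp":
        return (not holds(W, R, V, w, phi[1])) or holds(W, R, V, w, phi[2])
    if op == "box":
        return all(holds(W, R, V, v, phi[1]) for v in W if (w, v) in R)

W = {"w1", "w2"}
R = {("w1", "w2"), ("w2", "w1")}   # symmetric, so B is valid on this frame
V = {"p": {"w2"}}
p = ("var", "p")
four = ("imp", ("box", p), ("box", ("box", p)))  # 4: □p → □□p
print(holds(W, R, V, "w1", four))  # False: 4 fails at w1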

Theorem 40.33. KTB ⊬ 4 and KTB ⊬ 5.

Proof. By ?? we know that all instances of T are true in every reflexive model and
all instances of B in every symmetric model. So by soundness, it suffices to find a reflexive symmetric model containing a world at which some instance of 4 fails, and
similarly for 5. We use the same model for both claims. Consider the symmetric, reflexive model in ??. Then M, w1 ⊮ □p → □□p, so 4 fails at w1. Similarly,
M, w2 ⊮ ♦¬p → □♦¬p, so the instance of 5 with ϕ = ¬p fails at w2.

[Figure: a reflexive, symmetric model with three worlds w1, w2, w3; p holds at w1 and w2, ¬p at w3, and w1Rw2, w2Rw3; at w1, □p holds but □□p and ♦¬p fail; at w2, ♦¬p holds but □♦¬p fails.]

Figure 40.2: The model for ??.

Theorem 40.34. KD5 ≠ KT4 = S4.

Proof. By ?? we know that all instances of D and 5 are true in all serial euclidean models. So it suffices to find a serial euclidean model containing a
world at which some instance of 4 fails. Consider the model of ??, and notice
that M, w1 ⊮ □p → □□p.

[Figure: a serial euclidean model with four worlds: w1 and w4 (where ¬p holds), and w2 and w3 (where p holds); at w1, □p holds but □□p fails.]

Figure 40.3: The model for ??.

40.11 Derivability from a Set of Formulas


In ?? we defined a notion of provability of a formula in a system Σ. We now
extend this notion to provability in Σ from formulas in a set Γ.

Definition 40.35. A formula ϕ is derivable in a system Σ from a set of formulas
Γ, written Γ ⊢Σ ϕ, if and only if there are ψ1, . . . , ψn ∈ Γ such that Σ ⊢
ψ1 → (ψ2 → · · · (ψn → ϕ) · · · ).

40.12 Properties of Derivability


Proposition 40.36. Let Σ be a modal system and Γ a set of modal formulas. The
following properties hold:

1. Monotony: If Γ ⊢Σ ϕ and Γ ⊆ ∆ then ∆ ⊢Σ ϕ;

2. Reflexivity: If ϕ ∈ Γ then Γ ⊢Σ ϕ;

3. Cut: If Γ ⊢Σ ϕ and ∆ ∪ {ϕ} ⊢Σ ψ then Γ ∪ ∆ ⊢Σ ψ;

4. Deduction theorem: Γ ∪ {ψ} ⊢Σ ϕ if and only if Γ ⊢Σ ψ → ϕ;

5. Rule T: If Γ ⊢Σ ϕ1 and . . . and Γ ⊢Σ ϕn and ϕ1 → (ϕ2 → · · · (ϕn → ψ) · · · ) is a tautological instance, then Γ ⊢Σ ψ.


The proof is an easy exercise. Part ?? of ?? gives us that, for instance, if
Γ ⊢Σ ϕ ∨ ψ and Γ ⊢Σ ¬ϕ, then Γ ⊢Σ ψ. Also, in what follows, we write
Γ, ϕ ⊢Σ ψ instead of Γ ∪ {ϕ} ⊢Σ ψ.

Definition 40.37. A set Γ is deductively closed relative to a system Σ if and
only if Γ ⊢Σ ϕ implies ϕ ∈ Γ.

40.13 Consistency
Consistency is an important property of sets of formulas. A set of formulas is
inconsistent if a contradiction, such as ⊥, is derivable from it; and otherwise
consistent. If a set is inconsistent, its formulas cannot all be true in a model at a
world. For the completeness theorem we prove the converse: every consistent
set is true at a world in a model, namely in the “canonical model.”

Definition 40.38. A set Γ is consistent relative to a system Σ or, as we will
say, Σ-consistent, if and only if Γ ⊬Σ ⊥.

So for instance, the set {□(p → q), □p, ¬□q} is consistent relative to propositional logic, but not K-consistent. Similarly, the set {♦p, □♦p → q, ¬q} is not
K5-consistent.

Proposition 40.39. Let Γ be a set of formulas. Then:

1. Γ is Σ-consistent if and only if there is some formula ϕ such that Γ ⊬Σ ϕ.

2. Γ ⊢Σ ϕ if and only if Γ ∪ {¬ϕ} is not Σ-consistent.

3. If Γ is Σ-consistent, then for any formula ϕ, either Γ ∪ {ϕ} is Σ-consistent or Γ ∪ {¬ϕ} is Σ-consistent.

Proof. These facts follow easily using classical propositional logic. We give the
argument for ??. Proceed contrapositively and suppose neither Γ ∪ {ϕ} nor
Γ ∪ {¬ϕ} is Σ-consistent. Then by ??, both Γ, ϕ ⊢Σ ⊥ and Γ, ¬ϕ ⊢Σ ⊥. By the
deduction theorem Γ ⊢Σ ϕ → ⊥ and Γ ⊢Σ ¬ϕ → ⊥. But (ϕ → ⊥) → ((¬ϕ →
⊥) → ⊥) is a tautological instance, hence by ??, Γ ⊢Σ ⊥.

Problems
Problem 40.1. Prove ??.

Problem 40.2. Find derivations in K for the following formulas:

1. □¬p → □(p → q)

2. (□p ∨ □q) → □(p ∨ q)

3. ♦p → ♦(p ∨ q)


Problem 40.3. Prove ?? by proving, by induction on the complexity of χ, that
if K ⊢ ϕ ↔ ψ then K ⊢ χ[ϕ/p] ↔ χ[ψ/p].

Problem 40.4. Show that the following derivability claims hold:

1. K ⊢ ♦¬⊥ → (□ϕ → ♦ϕ);

2. K ⊢ □(ϕ ∨ ψ) → (♦ϕ ∨ □ψ);

3. K ⊢ (♦ϕ → □ψ) → □(ϕ → ψ).

Problem 40.5. Show that for each formula ϕ in ??: K ⊢ ϕ ↔ ϕ♦.

Problem 40.6. Prove ??.

Problem 40.7. Give an alternative proof of ?? using a model with 3 worlds.

Problem 40.8. Provide a single reflexive transitive model showing that both
KT4 ⊬ B and KT4 ⊬ 5.



Chapter 41

Completeness and Canonical


Models

41.1 Introduction
If Σ is a modal system, then the soundness theorem establishes that if Σ ⊢ ϕ,
then ϕ is valid in any class C of models in which all instances of all formulas
in Σ are valid. In particular that means that if K ⊢ ϕ then ϕ is true in all
models; if KT ⊢ ϕ then ϕ is true in all reflexive models; if KD ⊢ ϕ then ϕ is
true in all serial models, etc.
Completeness is the converse of soundness: that K is complete means that
if a formula ϕ is valid, then K ⊢ ϕ. Proving completeness is a lot harder
to do than proving soundness. It is useful, first, to consider the contrapositive:
K is complete iff whenever ⊬ ϕ, there is a countermodel, i.e., a model M such
that M ⊭ ϕ. Equivalently (negating ϕ), we could prove that whenever ⊬
¬ϕ, there is a model of ϕ. In the construction of such a model, we can use
information contained in ϕ. When we find models for specific formulas we
often do the same: E.g., if we want to find a countermodel to p → □q, we know
that it has to contain a world where p is true and □q is false. And a world
where □q is false means there has to be a world accessible from it where q is
false. And that's all we need to know: which worlds make the propositional
variables true, and which worlds are accessible from which worlds.
In the case of proving completeness, however, we don't have a specific
formula ϕ for which we are constructing a model. We want to establish that
a model exists for every ϕ such that ⊬Σ ¬ϕ. This is a minimal requirement,
since if ⊢Σ ¬ϕ, by soundness, there is no model for ϕ (in which Σ is true).
Now note that ⊬Σ ¬ϕ iff ϕ is Σ-consistent. (Recall that ⊬Σ ¬ϕ and ϕ ⊬Σ ⊥
are equivalent.) So our task is to construct a model for every Σ-consistent
formula.


look like. Such sets are complete Σ-consistent sets. It’s not enough to construct
a model with a single world to make ϕ true, it will have to contain multiple
worlds and an accessibility relation. The complete Σ-consistent set contain-
ing ϕ will also contain other formulas of the form □ψ and ♦χ. In all accessible
worlds, ψ has to be true; in at least one, χ has to be true. In order to accom-
plish this, we’ll simply take all possible complete Σ-consistent sets as the basis
for the set of worlds. A tricky part will be to figure out when a complete
Σ-consistent set should count as being accessible from another in our model.
We’ll show that in the model so defined, ϕ is true at a world—which is
also a complete Σ-consistent set—iff ϕ is an element of that set. If ϕ is Σ-
consistent, it will be an element of at least one complete Σ-consistent set (a
fact we’ll prove), and so there will be a world where ϕ is true. So we will have
a single model where every Σ-consistent formula ϕ is true at some world. This
single model is the canonical model for Σ.

41.2 Complete Σ-Consistent Sets


Suppose Σ is a set of modal formulas—think of them as the axioms or defining
principles of a normal modal logic. A set Γ is Σ-consistent iff Γ ⊬Σ ⊥, i.e., if
there is no derivation of ϕ1 → ( ϕ2 → · · · ( ϕn → ⊥) . . . ) from Σ, where each
ϕi ∈ Γ. We will construct a “canonical” model in which each world is taken
to be a special kind of Σ-consistent set: one which is not just Σ-consistent,
but maximally so, in the sense that it settles the truth value of every modal
formula: for every ϕ, either ϕ ∈ Γ or ¬ ϕ ∈ Γ:

Definition 41.1. A set Γ is complete Σ-consistent if and only if it is Σ-consistent
and for every ϕ, either ϕ ∈ Γ or ¬ϕ ∈ Γ.

Complete Σ-consistent sets Γ have a number of useful properties. For one,
they are deductively closed, i.e., if Γ ⊢Σ ϕ then ϕ ∈ Γ. This means in particular that every instance of a formula ϕ ∈ Σ is also an element of Γ. Moreover, membership
in Γ mirrors the truth conditions for the propositional connectives. This will
be important when we define the "canonical model."
be important when we define the “canonical model.”

Proposition 41.2. Suppose Γ is complete Σ-consistent. Then:

1. Γ is deductively closed in Σ.

2. Σ ⊆ Γ.

3. ⊥ ∉ Γ.

4. ¬ϕ ∈ Γ if and only if ϕ ∉ Γ.

5. ϕ ∧ ψ ∈ Γ iff ϕ ∈ Γ and ψ ∈ Γ.

6. ϕ ∨ ψ ∈ Γ iff ϕ ∈ Γ or ψ ∈ Γ.

7. ϕ → ψ ∈ Γ iff ϕ ∉ Γ or ψ ∈ Γ.

Proof. 1. Suppose Γ ⊢Σ ϕ but ϕ ∉ Γ. Then since Γ is complete Σ-consistent,
¬ϕ ∈ Γ. This would make Γ inconsistent, since ϕ, ¬ϕ ⊢Σ ⊥.

2. If ϕ ∈ Σ then Γ ⊢Σ ϕ, and ϕ ∈ Γ by deductive closure, i.e., case ??.

3. If ⊥ ∈ Γ, then Γ ⊢Σ ⊥, so Γ would be Σ-inconsistent.

4. If ¬ϕ ∈ Γ, then by consistency ϕ ∉ Γ; and if ϕ ∉ Γ then ¬ϕ ∈ Γ since Γ is
complete Σ-consistent.

5. Exercise.

6. Suppose ϕ ∨ ψ ∈ Γ, and ϕ ∉ Γ and ψ ∉ Γ. Since Γ is complete Σ-consistent, ¬ϕ ∈ Γ and ¬ψ ∈ Γ. Then ¬(ϕ ∨ ψ) ∈ Γ since ¬ϕ →
(¬ψ → ¬(ϕ ∨ ψ)) is a tautological instance. This would mean that Γ
is Σ-inconsistent, a contradiction.

7. Exercise.

41.3 Lindenbaum’s Lemma


Lindenbaum’s Lemma establishes that every Σ-consistent set of formulas is
contained in at least one complete Σ-consistent set. Our construction of the
canonical model will show that for each complete Σ-consistent set ∆, there is a
world in the canonical model where all and only the formulas in ∆ are true. So
Lindenbaum’s Lemma guarantees that every Σ-consistent set is true at some
world in the canonical model.

Theorem 41.3 (Lindenbaum's Lemma). If Γ is Σ-consistent, then there is a complete Σ-consistent set ∆ extending Γ.

Proof. Let ϕ0, ϕ1, . . . be an exhaustive listing of all formulas of the language
(repetitions are allowed). For instance, start by listing p0, and at each stage
n ≥ 1 list the finitely many formulas of length n using only variables among
p0, . . . , pn. We define sets of formulas ∆n by induction on n, and then set
∆ = ⋃n ∆n. We first put ∆0 = Γ. Supposing that ∆n has been defined, we
define ∆n+1 by:

∆n+1 = ∆n ∪ {ϕn}, if ∆n ∪ {ϕn} is Σ-consistent;
∆n+1 = ∆n ∪ {¬ϕn}, otherwise.

We have to show that this definition actually yields a set ∆ with the required properties, i.e., Γ ⊆ ∆ and ∆ is complete Σ-consistent.
It's obvious that Γ ⊆ ∆, since ∆0 ⊆ ∆ by construction, and ∆0 = Γ. In
fact, ∆n ⊆ ∆ for all n, since ∆ is the union of all ∆n. (Since in each step of
the construction, we add a formula to the set already constructed, ∆n ⊆ ∆n+1;
so, since ⊆ is transitive, ∆n ⊆ ∆m whenever n ≤ m.) At each stage of the
construction, we either add ϕn or ¬ϕn, and every formula appears (at least
once) in the list of all ϕn. So, for every ϕ either ϕ ∈ ∆ or ¬ϕ ∈ ∆, and ∆ is
complete by definition.
Finally, we have to show that ∆ is Σ-consistent. To do this, we show that
(a) if ∆ were Σ-inconsistent, then some ∆n would be Σ-inconsistent, and (b)
all ∆n are Σ-consistent.
So suppose ∆ were Σ-inconsistent. Then ∆ ⊢Σ ⊥, i.e., there are ϕ1, . . . , ϕk ∈
∆ such that Σ ⊢ ϕ1 → (ϕ2 → · · · (ϕk → ⊥) . . . ). Since ∆ = ⋃n ∆n, each ϕi ∈ ∆ni
for some ni. Let n be the largest of these. Since ni ≤ n, ∆ni ⊆ ∆n. So, all ϕi are
in ∆n. This would mean ∆n ⊢Σ ⊥, i.e., ∆n is Σ-inconsistent.
To show that each ∆n is Σ-consistent, we use a simple induction on n.
∆0 = Γ, and we assumed Γ was Σ-consistent. So the claim holds for n = 0.
Now suppose it holds for n, i.e., ∆n is Σ-consistent. ∆n+1 is ∆n ∪ {ϕn}
if that is Σ-consistent, and otherwise it is ∆n ∪ {¬ϕn}. In the first case, ∆n+1 is
clearly Σ-consistent. In the second case, by ??, either ∆n ∪ {ϕn} or ∆n ∪ {¬ϕn} is
Σ-consistent, so ∆n+1 is Σ-consistent in this case as well.
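
The construction in this proof can be rendered schematically in code. The Python sketch below is purely illustrative: it assumes an enumeration of formulas and an oracle consistent for Σ-consistency, both of which are unavailable effectively in general, so it approximates only finitely many stages ∆0 ⊆ ∆1 ⊆ · · · of the construction:

def lindenbaum(gamma, formulas, consistent, steps):
    delta = set(gamma)                  # ∆0 = Γ
    for phi, _ in zip(formulas, range(steps)):
        if consistent(delta | {phi}):
            delta |= {phi}              # ∆_{n+1} = ∆_n ∪ {ϕ_n}
        else:
            delta |= {("not", phi)}     # ∆_{n+1} = ∆_n ∪ {¬ϕ_n}
    return delta

# Toy run: the "formulas" are three variables, and the "oracle" merely
# rejects sets containing some ϕ together with ¬ϕ (adequate for literals
# only, and certainly not a real consistency test).
lits = [("var", i) for i in range(3)]
toy = lambda S: not any(("not", f) in S for f in S)
print(lindenbaum({("not", ("var", 0))}, lits, toy, 3))
# {("not", ("var", 0)), ("var", 1), ("var", 2)}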

Corollary 41.4. Γ ⊢Σ ϕ if and only if ϕ ∈ ∆ for each complete Σ-consistent set ∆
extending Γ (including when Γ = ∅, in which case we get another characterization
of the modal system Σ).

Proof. Suppose Γ ⊢Σ ϕ, and let ∆ be any complete Σ-consistent set extending
Γ. If ϕ ∉ ∆, then by maximality ¬ϕ ∈ ∆, and so ∆ ⊢Σ ϕ (by Monotony) and
∆ ⊢Σ ¬ϕ (by Reflexivity), and so ∆ is inconsistent. Conversely, if Γ ⊬Σ ϕ, then
Γ ∪ {¬ϕ} is Σ-consistent, and by Lindenbaum's Lemma there is a complete
Σ-consistent set ∆ extending Γ ∪ {¬ϕ}. By consistency, ϕ ∉ ∆.

41.4 Modalities and Complete Consistent Sets


When we construct a model M^Σ whose set of worlds is given by the complete
Σ-consistent sets ∆ in some normal modal logic Σ, we will also need to define
an accessibility relation R^Σ between such "worlds." We want it to be the case
that the accessibility relation (and the assignment V^Σ) are defined in such a
way that M^Σ, ∆ ⊩ ϕ iff ϕ ∈ ∆. How should we do this?
Once the accessibility relation is defined, the definition of truth at a world
ensures that M^Σ, ∆ ⊩ □ϕ iff M^Σ, ∆′ ⊩ ϕ for all ∆′ such that R^Σ∆∆′. The proof
that M^Σ, ∆ ⊩ ϕ iff ϕ ∈ ∆ requires that this holds in particular for formulas
starting with a modal operator, i.e., M^Σ, ∆ ⊩ □ϕ iff □ϕ ∈ ∆. Combining this
requirement with the definition of truth at a world for □ϕ yields:

□ϕ ∈ ∆ iff ϕ ∈ ∆′ for all ∆′ with R^Σ∆∆′


Consider the left-to-right direction: it says that if □ϕ ∈ ∆, then ϕ ∈ ∆′ for
any ϕ and any ∆′ with R^Σ∆∆′. If we stipulate that R^Σ∆∆′ iff ϕ ∈ ∆′ for all
□ϕ ∈ ∆, then this holds. We can write the condition on the right of the "iff"
more compactly as: {ϕ : □ϕ ∈ ∆} ⊆ ∆′.
So the question is: does this definition of R^Σ in fact guarantee that □ϕ ∈ ∆
iff M^Σ, ∆ ⊩ □ϕ? Does it also guarantee that ♦ϕ ∈ ∆ iff M^Σ, ∆ ⊩ ♦ϕ? The next
few results will establish this.

Definition 41.5. If Γ is a set of formulas, let

□Γ = {□ψ : ψ ∈ Γ},
♦Γ = {♦ψ : ψ ∈ Γ}

and

□⁻¹Γ = {ψ : □ψ ∈ Γ},
♦⁻¹Γ = {ψ : ♦ψ ∈ Γ}.

In other words, □Γ is Γ with □ in front of every formula in Γ, and □⁻¹Γ consists of
the □-ed formulas of Γ with their initial □ removed. This definition is not
terribly important on its own, but will simplify the notation considerably.
Note that □(□⁻¹Γ) ⊆ Γ:

□(□⁻¹Γ) = {□ψ : □ψ ∈ Γ},

i.e., it's just the set of all those formulas of Γ that start with □.

Lemma 41.6. If Γ ⊢Σ ϕ then □Γ ⊢Σ □ϕ.

Proof. If Γ ⊢Σ ϕ then there are ψ1, . . . , ψk ∈ Γ such that Σ ⊢ ψ1 → (ψ2 →
· · · (ψk → ϕ) · · · ). Since Σ is normal, by rule RK, Σ ⊢ □ψ1 → (□ψ2 → · · · (□ψk →
□ϕ) · · · ), where obviously □ψ1, . . . , □ψk ∈ □Γ. Hence, by definition, □Γ ⊢Σ
□ϕ.

Lemma 41.7. If □⁻¹Γ ⊢Σ ϕ then Γ ⊢Σ □ϕ.

Proof. Suppose □⁻¹Γ ⊢Σ ϕ; then by ??, □(□⁻¹Γ) ⊢Σ □ϕ. But since □(□⁻¹Γ) ⊆ Γ,
also Γ ⊢Σ □ϕ by Monotony.

Proposition 41.8. If Γ is complete Σ-consistent, then □ϕ ∈ Γ if and only if for
every complete Σ-consistent ∆ such that □⁻¹Γ ⊆ ∆, it holds that ϕ ∈ ∆.


Proof. Suppose Γ is complete Σ-consistent. The "only if" direction is easy:
Suppose □ϕ ∈ Γ and that □⁻¹Γ ⊆ ∆. Since □ϕ ∈ Γ, ϕ ∈ □⁻¹Γ ⊆ ∆, so ϕ ∈ ∆.
For the "if" direction, we prove the contrapositive: Suppose □ϕ ∉ Γ. Since
Γ is complete Σ-consistent, it is deductively closed, and hence Γ ⊬Σ □ϕ. By
??, □⁻¹Γ ⊬Σ ϕ. By ??, □⁻¹Γ ∪ {¬ϕ} is Σ-consistent. By Lindenbaum's
Lemma, there is a complete Σ-consistent set ∆ such that □⁻¹Γ ∪ {¬ϕ} ⊆ ∆.
By consistency, ϕ ∉ ∆.

Lemma 41.9. Suppose Γ and ∆ are complete Σ-consistent. Then: □⁻¹Γ ⊆ ∆ if and
only if ♦∆ ⊆ Γ.

Proof. "Only if" direction: Assume □⁻¹Γ ⊆ ∆ and suppose ♦ϕ ∈ ♦∆ (i.e.,
ϕ ∈ ∆). In order to show ♦ϕ ∈ Γ it suffices to show □¬ϕ ∉ Γ, for then by
maximality ¬□¬ϕ ∈ Γ. Now, if □¬ϕ ∈ Γ then by hypothesis ¬ϕ ∈ ∆, against
the consistency of ∆ (since ϕ ∈ ∆). Hence □¬ϕ ∉ Γ, as required.
"If" direction: Assume ♦∆ ⊆ Γ. We argue contrapositively: suppose ϕ ∉ ∆
in order to show □ϕ ∉ Γ. If ϕ ∉ ∆ then by maximality ¬ϕ ∈ ∆, and so by
hypothesis ♦¬ϕ ∈ Γ. But in a normal modal logic ♦¬ϕ is equivalent to ¬□ϕ,
and if the latter is in Γ, by consistency □ϕ ∉ Γ, as required.

Proposition 41.10. If Γ is complete Σ-consistent, then ♦ϕ ∈ Γ if and only if for
some complete Σ-consistent ∆ such that ♦∆ ⊆ Γ, it holds that ϕ ∈ ∆.

Proof. Suppose Γ is complete Σ-consistent. ♦ϕ ∈ Γ iff ¬□¬ϕ ∈ Γ by DUAL
and closure. ¬□¬ϕ ∈ Γ iff □¬ϕ ∉ Γ by ?? since Γ is complete Σ-consistent.
By ??, □¬ϕ ∉ Γ iff, for some complete Σ-consistent ∆ with □⁻¹Γ ⊆ ∆, ¬ϕ ∉ ∆.
Now consider any such ∆. By ??, □⁻¹Γ ⊆ ∆ iff ♦∆ ⊆ Γ. Also, ¬ϕ ∉ ∆ iff
ϕ ∈ ∆ by ??. So ♦ϕ ∈ Γ iff, for some complete Σ-consistent ∆ with ♦∆ ⊆ Γ,
ϕ ∈ ∆.

41.5 Canonical Models


The canonical model for a modal system Σ is a specific model M^Σ in which
the worlds are all complete Σ-consistent sets. Its accessibility relation R^Σ and
valuation V^Σ are defined so as to guarantee that the formulas true at a world ∆
are exactly the formulas making up ∆.

Definition 41.11. Let Σ be a normal modal logic. The canonical model for Σ is
M^Σ = ⟨W^Σ, R^Σ, V^Σ⟩, where:

1. W^Σ = {∆ : ∆ is complete Σ-consistent}.

2. R^Σ∆∆′ holds if and only if □⁻¹∆ ⊆ ∆′.

3. V^Σ(p) = {∆ : p ∈ ∆}.


41.6 The Truth Lemma


The canonical model MΣ is defined in such a way that MΣ , ∆ ϕ iff ϕ ∈ ∆.
For propositional variables, the definition of V Σ yields this directly. We have
to verify that the equivalence holds for all formulas, however. We do this by
induction. The inductive step involves proving the equivalence for formulas
involving propositional operators (where we have to use ??) and the modal
operators (where we invoke the results of ??).

Proposition 41.12 (Truth Lemma). For every formula ϕ, M^Σ, ∆ ⊩ ϕ if and only if
ϕ ∈ ∆.

Proof. By induction on ϕ.

1. ϕ ≡ ⊥: M^Σ, ∆ ⊮ ⊥ by ??, and ⊥ ∉ ∆ by ??.

2. ϕ ≡ p: M^Σ, ∆ ⊩ p iff ∆ ∈ V^Σ(p) by ??. Also, ∆ ∈ V^Σ(p) iff p ∈ ∆ by
definition of V^Σ.

3. ϕ ≡ ¬ψ: M^Σ, ∆ ⊩ ¬ψ iff M^Σ, ∆ ⊮ ψ (??) iff ψ ∉ ∆ (by inductive
hypothesis) iff ¬ψ ∈ ∆ (by ??).

4. ϕ ≡ ψ ∧ χ: Exercise.

5. ϕ ≡ ψ ∨ χ: M^Σ, ∆ ⊩ ψ ∨ χ iff M^Σ, ∆ ⊩ ψ or M^Σ, ∆ ⊩ χ (by ??) iff ψ ∈ ∆
or χ ∈ ∆ (by inductive hypothesis) iff ψ ∨ χ ∈ ∆ (by ??).

6. ϕ ≡ ψ → χ: Exercise.

7. ϕ ≡ □ψ: First suppose that M^Σ, ∆ ⊩ □ψ. By ??, for every ∆′ such
that R^Σ∆∆′, M^Σ, ∆′ ⊩ ψ. By inductive hypothesis, for every ∆′ such that
R^Σ∆∆′, ψ ∈ ∆′. By definition of R^Σ, for every ∆′ such that □⁻¹∆ ⊆ ∆′,
ψ ∈ ∆′. By ??, □ψ ∈ ∆.
Now assume □ψ ∈ ∆. Let ∆′ ∈ W^Σ be such that R^Σ∆∆′, i.e., □⁻¹∆ ⊆
∆′. Since □ψ ∈ ∆, ψ ∈ □⁻¹∆. Consequently, ψ ∈ ∆′. By inductive
hypothesis, M^Σ, ∆′ ⊩ ψ. Since ∆′ was arbitrary with R^Σ∆∆′, for all ∆′ ∈ W^Σ
such that R^Σ∆∆′, M^Σ, ∆′ ⊩ ψ. By ??, M^Σ, ∆ ⊩ □ψ.

8. ϕ ≡ ♦ψ: Exercise.

41.7 Determination and Completeness for K


We are now prepared to use the canonical model to establish completeness.
Completeness follows from the fact that the formulas true in the canonical
for Σ are exactly the Σ-derivable ones. Models with this property are said to
determine Σ.


Definition 41.13. A model M determines a normal modal logic Σ precisely
when M ⊨ ϕ if and only if Σ ⊢ ϕ, for all formulas ϕ.

Theorem 41.14 (Determination). M^Σ ⊨ ϕ if and only if Σ ⊢ ϕ.

Proof. If M^Σ ⊨ ϕ, then for every complete Σ-consistent ∆, we have M^Σ, ∆ ⊩ ϕ.
Hence, by the Truth Lemma, ϕ ∈ ∆ for every complete Σ-consistent ∆, whence
by ?? (with Γ = ∅), Σ ⊢ ϕ.
Conversely, if Σ ⊢ ϕ then by ??, every complete Σ-consistent ∆ contains ϕ, and hence by the Truth Lemma, M^Σ, ∆ ⊩ ϕ for every ∆ ∈ W^Σ, i.e.,
M^Σ ⊨ ϕ.

Since the canonical model for K determines K, we immediately have completeness of K as a corollary:

Corollary 41.15. The basic modal logic K is complete with respect to the class of all
models, i.e., if ⊨ ϕ then K ⊢ ϕ.

Proof. Contrapositively, if K ⊬ ϕ then by Determination M^K ⊭ ϕ, and hence ϕ
is not valid.

For the general case of completeness of a system Σ with respect to a class of
models, e.g., of KTB4 with respect to the class of reflexive, symmetric, transitive models, determination alone is not enough. We must also show that
the canonical model for the system Σ is a member of the class, which does
not follow obviously from the canonical model construction—nor is it always
true!

41.8 Frame Completeness


The completeness theorem for K can be extended to other modal systems,
once we show that the canonical model for a given logic has the corresponding
frame property.

Theorem 41.16. If a normal modal logic Σ contains one of the formulas on the left-
hand side of ??, then the canonical model for Σ has the corresponding property on the
right-hand side.
If Σ contains . . .    . . . the canonical model for Σ is:
D: □ϕ → ♦ϕ            serial;
T: □ϕ → ϕ             reflexive;
B: ϕ → □♦ϕ            symmetric;
4: □ϕ → □□ϕ           transitive;
5: ♦ϕ → □♦ϕ           euclidean.

Table 41.1: Basic correspondence facts.


Proof. We take each of these up in turn.
Suppose Σ contains D, and let ∆ ∈ W^Σ; we need to show that there is a
∆′ such that R^Σ∆∆′. It suffices to show that □⁻¹∆ is Σ-consistent, for then by
Lindenbaum's Lemma, there is a complete Σ-consistent set ∆′ ⊇ □⁻¹∆, and
by definition of R^Σ we have R^Σ∆∆′. So, suppose for contradiction that □⁻¹∆
is not Σ-consistent, i.e., □⁻¹∆ ⊢Σ ⊥. By ??, ∆ ⊢Σ □⊥, and since Σ contains
D, also ∆ ⊢Σ ♦⊥. But Σ is normal, so Σ ⊢ ¬♦⊥ (??), whence also ∆ ⊢Σ ¬♦⊥,
against the consistency of ∆.
Now suppose Σ contains T, and let ∆ ∈ W^Σ. We want to show R^Σ∆∆, i.e.,
□⁻¹∆ ⊆ ∆. But if □ϕ ∈ ∆ then by T also ϕ ∈ ∆, as desired.
Now suppose Σ contains B, and suppose R^Σ∆∆′ for ∆, ∆′ ∈ W^Σ. We need
to show that R^Σ∆′∆, i.e., □⁻¹∆′ ⊆ ∆. By ??, this is equivalent to ♦∆ ⊆ ∆′. So
suppose ϕ ∈ ∆. By B, also □♦ϕ ∈ ∆. By the hypothesis that R^Σ∆∆′, we have
that □⁻¹∆ ⊆ ∆′, and hence ♦ϕ ∈ ∆′, as required.
Now suppose Σ contains 4, and suppose R^Σ∆1∆2 and R^Σ∆2∆3. We need to
show R^Σ∆1∆3. From the hypothesis we have both □⁻¹∆1 ⊆ ∆2 and □⁻¹∆2 ⊆
∆3. In order to show R^Σ∆1∆3 it suffices to show □⁻¹∆1 ⊆ ∆3. So let ψ ∈
□⁻¹∆1, i.e., □ψ ∈ ∆1. By 4, also □□ψ ∈ ∆1, and by hypothesis we get, first,
that □ψ ∈ ∆2 and, second, that ψ ∈ ∆3, as desired.
Now suppose Σ contains 5, and suppose R^Σ∆1∆2 and R^Σ∆1∆3. We need to
show R^Σ∆2∆3. The first hypothesis gives □⁻¹∆1 ⊆ ∆2, and the second hypothesis is equivalent to ♦∆3 ⊆ ∆1, by ??. To show R^Σ∆2∆3, by ??, it suffices
to show ♦∆3 ⊆ ∆2. So let ♦ϕ ∈ ♦∆3, i.e., ϕ ∈ ∆3. By the second hypothesis
♦ϕ ∈ ∆1, and by 5, □♦ϕ ∈ ∆1 as well. But now the first hypothesis gives
♦ϕ ∈ ∆2, as desired.

As a corollary we obtain completeness results for a number of systems.
For instance, we now know that S5 = KT5 = KTB4 is complete with respect to
the class of all reflexive euclidean models, which is the same as the class of all
reflexive, symmetric and transitive models.

Theorem 41.17. Let CD , CT , CB , C4 , and C5 be the class of all serial, reflexive, sym-
metric, transitive, and euclidean models (respectively). Then for any schemas ϕ1 , . . . ,
ϕn among D, T, B, 4, and 5, the system Kϕ1 . . . ϕn is determined by the class of
models C = C ϕ1 ∩ · · · ∩ C ϕn .

Proposition 41.18. Let Σ be a normal modal logic; then:

1. If Σ contains the schema ♦ϕ → □ϕ, then the canonical model for Σ is partially
functional.

2. If Σ contains the schema ♦ϕ ↔ □ϕ, then the canonical model for Σ is functional.

3. If Σ contains the schema □□ϕ → □ϕ, then the canonical model for Σ is weakly
dense.


(see ?? for definitions of these frame properties).

Proof. 1. Suppose that Σ contains the schema ♦ϕ → □ϕ. To show that RΣ is partially functional, we need to prove that for any ∆1, ∆2, ∆3 ∈ WΣ, if RΣ∆1∆2 and RΣ∆1∆3 then ∆2 = ∆3. Since RΣ∆1∆2 we have □⁻¹∆1 ⊆ ∆2, and since RΣ∆1∆3 also □⁻¹∆1 ⊆ ∆3. The identity ∆2 = ∆3 will follow if we can establish the two inclusions ∆2 ⊆ ∆3 and ∆3 ⊆ ∆2. For the first inclusion, let ϕ ∈ ∆2; then ♦ϕ ∈ ∆1, and by the schema and deductive closure of ∆1 also □ϕ ∈ ∆1, whence by the hypothesis that RΣ∆1∆3, ϕ ∈ ∆3. The second inclusion is similar.

2. This follows immediately from part ?? and the seriality proof in ??.

3. Suppose Σ contains the schema □□ϕ → □ϕ. To show that RΣ is weakly dense, let RΣ∆1∆2. We need to show that there is a complete Σ-consistent set ∆3 such that RΣ∆1∆3 and RΣ∆3∆2. Let:

Γ = □⁻¹∆1 ∪ ♦∆2.

It suffices to show that Γ is Σ-consistent, for then by Lindenbaum's Lemma it can be extended to a complete Σ-consistent set ∆3 such that □⁻¹∆1 ⊆ ∆3 and ♦∆2 ⊆ ∆3, i.e., RΣ∆1∆3 and RΣ∆3∆2 (by ??).

Suppose for contradiction that Γ is not consistent. Then there are formulas □ϕ1, . . . , □ϕn ∈ ∆1 and ψ1, . . . , ψm ∈ ∆2 such that

ϕ1, . . . , ϕn, ♦ψ1, . . . , ♦ψm ⊢Σ ⊥.

Since ♦(ψ1 ∧ · · · ∧ ψm) → (♦ψ1 ∧ · · · ∧ ♦ψm) is derivable in every normal modal logic, we argue as follows, contradicting the consistency of ∆2:

ϕ1, . . . , ϕn, ♦ψ1, . . . , ♦ψm ⊢Σ ⊥
⇒ ϕ1, . . . , ϕn ⊢Σ (♦ψ1 ∧ · · · ∧ ♦ψm) → ⊥, deduction theorem;
⇒ ϕ1, . . . , ϕn ⊢Σ ♦(ψ1 ∧ · · · ∧ ψm) → ⊥, Σ is normal;
⇒ ϕ1, . . . , ϕn ⊢Σ ¬♦(ψ1 ∧ · · · ∧ ψm), PL;
⇒ ϕ1, . . . , ϕn ⊢Σ □¬(ψ1 ∧ · · · ∧ ψm), ??;
⇒ □ϕ1, . . . , □ϕn ⊢Σ □□¬(ψ1 ∧ · · · ∧ ψm), ??;
⇒ □ϕ1, . . . , □ϕn ⊢Σ □¬(ψ1 ∧ · · · ∧ ψm), by the schema;
⇒ ∆1 ⊢Σ □¬(ψ1 ∧ · · · ∧ ψm), monotony;
⇒ □¬(ψ1 ∧ · · · ∧ ψm) ∈ ∆1, deductive closure;
⇒ ¬(ψ1 ∧ · · · ∧ ψm) ∈ ∆2, since RΣ∆1∆2.

On the strength of these examples, one might think that every system Σ of
modal logic is complete, in the sense that it proves every formula which is valid
in every frame in which every theorem of Σ is valid. Unfortunately, there are
many systems that are not complete in this sense.


Problems
Problem 41.1. Complete the proof of ??.

Problem 41.2. Show that if Γ is complete Σ-consistent, then ♦ϕ ∈ Γ if and only if there is a complete Σ-consistent ∆ such that □⁻¹Γ ⊆ ∆ and ϕ ∈ ∆. Do this without using ??.

Problem 41.3. Complete the proof of ??.



Chapter 42

Filtrations and Decidability

42.1 Introduction
One important question about a logic is always whether it is decidable, i.e., whether there is an effective procedure which will answer the question “is this formula valid?” Propositional logic is decidable: we can effectively test if a formula is a tautology by constructing a truth table, and for a given formula, the truth table is finite. But we can't obviously test if a modal formula is true in all models, for there are infinitely many of them. We can list all the finite models relevant to a given formula, since only the assignments of subsets of worlds to the propositional variables which actually occur in the formula are relevant. If the accessibility relation is fixed, the possible different assignments V(p) are just all the subsets of W, and if |W| = n there are 2^n of those. If our formula ϕ contains m propositional variables there are then 2^(nm) different models with n worlds. For each one, we can test if ϕ is true at all worlds, simply by computing the truth value of ϕ in each. Of course, we also have to check all possible accessibility relations, but there are only finitely many relations on n worlds as well (specifically, the number of subsets of W × W, i.e., 2^(n²)).
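This counting argument translates directly into a brute-force test. The following Python sketch is illustrative only and not part of the text; the tuple encoding of formulas and the names holds and countermodel are our own conventions. It enumerates every model with up to a given number of worlds and checks a formula at every world:

```python
from itertools import product

def holds(W, R, V, fml, w):
    """Truth of fml at world w in the model (W, R, V). Formulas are
    nested tuples: ("p", i), ("not", f), ("and", f, g), ("imp", f, g),
    ("box", f), ("dia", f)."""
    op = fml[0]
    if op == "p":
        return w in V[fml[1]]
    if op == "not":
        return not holds(W, R, V, fml[1], w)
    if op == "and":
        return holds(W, R, V, fml[1], w) and holds(W, R, V, fml[2], w)
    if op == "imp":
        return (not holds(W, R, V, fml[1], w)) or holds(W, R, V, fml[2], w)
    if op == "box":  # true iff true at every accessible world
        return all(holds(W, R, V, fml[1], v) for v in W if (w, v) in R)
    if op == "dia":  # true iff true at some accessible world
        return any(holds(W, R, V, fml[1], v) for v in W if (w, v) in R)
    raise ValueError(f"unknown operator: {op}")

def countermodel(fml, num_vars, max_worlds):
    """Search every model with at most max_worlds worlds for a world at
    which fml fails. There are 2^(n^2) relations and 2^(n*m) valuations
    for n worlds and m variables, so this is feasible only for tiny n."""
    for n in range(1, max_worlds + 1):
        W = list(range(n))
        pairs = [(u, v) for u in W for v in W]
        for rbits in product([False, True], repeat=len(pairs)):
            R = {p for p, b in zip(pairs, rbits) if b}
            for vbits in product([False, True], repeat=n * num_vars):
                V = [{w for w in W if vbits[i * n + w]} for i in range(num_vars)]
                for w in W:
                    if not holds(W, R, V, fml, w):
                        return n, R, V, w
    return None

# The schema T, box p -> p, is not K-valid: the search finds the
# one-world model with an empty accessibility relation.
print(countermodel(("imp", ("box", ("p", 0)), ("p", 0)), 1, 2))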
If we are not interested in the logic K, but in a logic defined by some class of models (e.g., the reflexive transitive models), we also have to be able to test if the accessibility relation is of the right kind. We can do that whenever the frames we are interested in are definable by modal formulas (e.g., by testing if T and 4 are valid in the frame). So the idea would be to run through all the finite frames, test each one to see if it is a frame in the class we're interested in, then list all the possible models on that frame and test if ϕ is true in each. If not, stop: ϕ is not valid in the class of models of interest.
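For classes given by first-order conditions on R, the test on a finite frame is itself a direct finite check. A minimal sketch, under the same illustrative conventions as above:

```python
def is_reflexive(W, R):
    """Every world accesses itself."""
    return all((w, w) in R for w in W)

def is_transitive(W, R):
    """Whenever u accesses v and v accesses w, u accesses w."""
    return all((u, w) in R
               for (u, v) in R
               for (x, w) in R if x == v)
```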
There is a problem with this idea: we don't know when, if ever, we can stop looking. If the formula has a finite countermodel, our procedure will find it. But if it has no finite countermodel, we won't get an answer. The formula may be valid (no countermodels at all), or it may have only infinite countermodels, which we'll never look at. This problem can be overcome if we can show that every formula that has a countermodel has a finite countermodel. If this is the case, we say the logic has the finite model property.
But how would we show that a logic has the finite model property? One
way of doing this would be to find a way to turn an infinite (counter)model
of ϕ into a finite one. If that can be done, then whenever there is a model
in which ϕ is not true, then the resulting finite model also makes ϕ not true.
That finite model will show up on our list of all finite models, and we will
eventually determine, for every formula that is not valid, that it isn’t. Our
procedure won’t terminate if the formula is valid. If we can show in addition
that there is some maximum size that the finite model our procedure provides
can have, and that this maximum size depends only on the formula ϕ, we
will have a size up to which we have to test finite models in our search for
countermodels. If we haven’t found a countermodel by then, there are none.
Then our procedure will, in fact, decide the question “is ϕ valid?” for any
formula ϕ.
A strategy that often works for turning infinite structures into finite struc-
tures is that of “identifying” elements of the structure which behave the same
way in relevant respects. If there are infinitely many worlds in M that be-
have the same in relevant respects, then we might hope that there are only
finitely many “classes” of such worlds. In other words, we partition the set
of worlds in the right way. Each partition contains infinitely many worlds,
but there are only finitely many partitions. Then we define a new model M∗
where the worlds are the partitions. Finitely many partitions in the old model
give us finitely many worlds in the new model, i.e., a finite model. Let’s call
the partition a world w is in [w]. We’ll want it to be the case that M, w ϕ iff
M∗ , [w] ϕ, since we want the new model to be a countermodel to ϕ if the old
one was. This requires that we define the partition, as well as the accessibility
relation of M∗ in the right way.
To see how this would go, first imagine we have no accessibility relation.
M, w ψ iff for some v ∈ W, M, v ψ, and the same for M∗ , except with
[w] and [v]. As a first idea, let’s say that two worlds u and v are equivalent
(belong to the same partition) if they agree on all propositional variables in M,
i.e., M, u p iff M, v p. Let V ∗ ( p) = {[w] : M, w p}. Our aim is to show
that M, w ϕ iff M∗ , [w] ϕ. Obviously, we’d prove this by induction: The
base case would be ϕ ≡ p. First suppose M, w p. Then [w] ∈ V ∗ by
definition, so M∗ , [w] p. Now suppose that M∗ , [w] p. That means that

[w] ∈ V ( p), i.e., for some v equivalent to w, M, v p. But “w equivalent to v”
means “w and v make all the same propositional variables true,” so M, w p.
Now for the inductive step, e.g., ϕ ≡ ¬ψ. Then M, w ¬ψ iff M, w 1 ψ
iff M∗ , [w] 1 ψ (by inductive hypothesis) iff M∗ , [w] ¬ψ. Similarly for the
other non-modal operators. It also works for : suppose M∗ , [w] ψ. That
means that for every [u], M∗ , [u] ψ. By inductive hypothesis, for every u,
M, u ψ. Consequently, M, w ψ.


In the general case, where we have to also define the accessibility relation
for M∗ , things are more complicated. We’ll call a model M∗ a filtration if its
accessibility relation R∗ satisfies the conditions required to make the induc-
tive proof above go through. Then any filtration M∗ will make ϕ true at [w]
iff M makes ϕ true at w. However, now we also have to show that there are
filtrations, i.e., we can define R∗ so that it satisfies the required conditions. In
order for this to work, however, we have to require that worlds u, v count as
equivalent not just when they agree on all propositional variables, but on all
sub-formulas of ϕ. Since ϕ has only finitely many sub-formulas, this will still
guarantee that the filtration is finite. There is not just one way to define a fil-
tration, and in order to make sure that the accessibility relation of the filtration
satisfies the required properties (e.g., reflexive, transitive, etc.) we have to be
inventive with the definition of R∗ .

42.2 Preliminaries
Filtrations allow us to establish the decidability of our systems of modal logic
by showing that they have the finite model property, i.e., that any formula that
is true (false) in a model is also true (false) in a finite model. Filtrations are
defined relative to sets of formulas which are closed under subformulas.

Definition 42.1. A set Γ of formulas is closed under subformulas if it contains every subformula of a formula in Γ. Further, Γ is modally closed if it is closed under subformulas and moreover ϕ ∈ Γ implies □ϕ, ♦ϕ ∈ Γ.

For instance, given a formula ϕ, the set of all its sub-formulas is closed
under sub-formulas. When we’re defining a filtration of a model through the
set of sub-formulas of ϕ, it will have the property we’re after: it makes ϕ true
(false) iff the original model does.
The set of worlds of a filtration of M through Γ is defined as the set of all
equivalence classes of the following equivalence relation.

Definition 42.2. Let M = ⟨W, R, V⟩ and suppose Γ is closed under subformulas. Define a relation ≡ on W to hold of any two worlds that make the same formulas from Γ true, i.e.:

u ≡ v if and only if ∀ϕ ∈ Γ: M, u ⊩ ϕ ⇔ M, v ⊩ ϕ.

The equivalence class [w]≡ of a world w, or [w] for short, is the set of all worlds ≡-equivalent to w:

[w] = {v : v ≡ w}.

Proposition 42.3. Given M and Γ, ≡ as defined above is an equivalence relation,


i.e., it is reflexive, symmetric, and transitive.


Proof. The relation ≡ is reflexive, since w makes exactly the same formulas
from Γ true as itself. It is symmetric since if u makes the same formulas from Γ
true as v, the same holds for v and u. It is also transitive, since if u makes the
same formulas from Γ true as v, and v as w, then u makes the same formulas
from Γ true as w.

The relation ≡, like any equivalence relation, divides W into partitions, i.e.,
subsets of W which are pairwise disjoint, and together cover all of W. Every
w ∈ W is an element of one of the partitions, namely of [w], since w ≡ w. So
the partitions [w] cover all of W. They are pairwise disjoint, for if u ∈ [w] and
u ∈ [v], then u ≡ w and u ≡ v, and by symmetry and transitivity, w ≡ v, and
so [w] = [v].
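Computationally, the partition is easy to produce once truth at worlds is available. The sketch below is illustrative only; truth(w, ϕ) is an assumed helper (e.g., the holds function from the previous section):

```python
def filtration_classes(W, truth, Gamma):
    """Partition W into the classes [w]: worlds u, v end up together iff
    they make exactly the same formulas from Gamma true."""
    classes = {}
    for w in W:
        key = frozenset(f for f in Gamma if truth(w, f))
        classes.setdefault(key, []).append(w)
    return list(classes.values())
```

Each class is keyed by the set of formulas from Γ true in it; this keying is exactly the injection into ℘(Γ) used below to show that filtrations through finite Γ are finite.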

42.3 Filtrations
Rather than define “the” filtration of M through Γ, we define when a model M∗
counts as a filtration of M. All filtrations have the same set of worlds W ∗ and
the same valuation V ∗ . But different filtrations may have different accessibil-
ity relations R∗ . To count as a filtration, R∗ has to satisfy a number of condi-
tions, however. These conditions are exactly what we'll require to prove the main result, namely that M, w ⊩ ϕ iff M∗, [w] ⊩ ϕ, provided ϕ ∈ Γ.

Definition 42.4. Let Γ be closed under subformulas and M = ⟨W, R, V⟩. A filtration of M through Γ is any model M∗ = ⟨W∗, R∗, V∗⟩, where:

1. W∗ = {[w] : w ∈ W};

2. For any u, v ∈ W:

   a) If Ruv then R∗[u][v];
   b) If R∗[u][v] then for any □ϕ ∈ Γ, if M, u ⊩ □ϕ then M, v ⊩ ϕ;
   c) If R∗[u][v] then for any ♦ϕ ∈ Γ, if M, v ⊩ ϕ then M, u ⊩ ♦ϕ.

3. V∗(p) = {[u] : u ∈ V(p)}.

It’s worthwhile thinking about what V ∗ ( p) is: the set consisting of the
equivalence classes [w] of all worlds w where p is true in M. On the one
hand, if w ∈ V ( p), then [w] ∈ V ∗ ( p) by that definition. However, it is not
necessarily the case that if [w] ∈ V ∗ ( p), then w ∈ V ( p). If [w] ∈ V ∗ ( p) we are
only guaranteed that [w] = [u] for some u ∈ V ( p). Of course, [w] = [u] means
that w ≡ u. So, when [w] ∈ V ∗ ( p) we can (only) conclude that w ≡ u for some
u ∈ V ( p ).

Theorem 42.5. If M∗ is a filtration of M through Γ, then for every ϕ ∈ Γ and w ∈ W, we have M, w ⊩ ϕ if and only if M∗, [w] ⊩ ϕ.


Proof. By induction on ϕ, using the fact that Γ is closed under subformulas. Since ϕ ∈ Γ and Γ is closed under subformulas, all subformulas of ϕ are also ∈ Γ. Hence in each inductive step, the induction hypothesis applies to the subformulas of ϕ.

1. ϕ ≡ ⊥: Neither M, w ⊩ ϕ nor M∗, [w] ⊩ ϕ.

2. ϕ ≡ p: The left-to-right direction is immediate, as M, w ⊩ ϕ only if w ∈ V(p), which implies [w] ∈ V∗(p), i.e., M∗, [w] ⊩ ϕ. Conversely, suppose M∗, [w] ⊩ ϕ, i.e., [w] ∈ V∗(p). Then for some v ∈ V(p), w ≡ v. Of course then also M, v ⊩ p. Since w ≡ v, w and v make the same formulas from Γ true. Since by assumption p ∈ Γ and M, v ⊩ p, M, w ⊩ ϕ.

3. ϕ ≡ ¬ψ: M, w ⊩ ϕ iff M, w ⊮ ψ. By induction hypothesis, M, w ⊮ ψ iff M∗, [w] ⊮ ψ. Finally, M∗, [w] ⊮ ψ iff M∗, [w] ⊩ ϕ.

4. Exercise.

5. ϕ ≡ (ψ ∨ χ): M, w ⊩ ϕ iff M, w ⊩ ψ or M, w ⊩ χ. By induction hypothesis, M, w ⊩ ψ iff M∗, [w] ⊩ ψ, and M, w ⊩ χ iff M∗, [w] ⊩ χ. And M∗, [w] ⊩ ϕ iff M∗, [w] ⊩ ψ or M∗, [w] ⊩ χ.

6. Exercise.

7. ϕ ≡ □ψ: Suppose M, w ⊩ ϕ; to show that M∗, [w] ⊩ ϕ, let v be such that R∗[w][v]. From ??, we have that M, v ⊩ ψ, and by inductive hypothesis M∗, [v] ⊩ ψ. Since v was arbitrary, M∗, [w] ⊩ ϕ follows.
Conversely, suppose M∗, [w] ⊩ ϕ and let v be arbitrary such that Rwv. From ??, we have R∗[w][v], so that M∗, [v] ⊩ ψ; by inductive hypothesis M, v ⊩ ψ, and since v was arbitrary, M, w ⊩ ϕ.

8. Exercise.

What holds for truth at worlds in a model also holds for truth in a model
and validity in a class of models.

Corollary 42.6. Let Γ be closed under subformulas. Then:

1. If M∗ is a filtration of M through Γ then for any ϕ ∈ Γ: M ⊩ ϕ if and only if M∗ ⊩ ϕ.

2. If C is a class of models and Γ(C) is the class of Γ-filtrations of models in C, then any formula ϕ ∈ Γ is valid in C if and only if it is valid in Γ(C).


42.4 Examples of Filtrations


We have not yet shown that there are any filtrations. But indeed, for any
model M, there are many filtrations of M through Γ. We identify two, in par-
ticular: the finest and coarsest filtrations. Filtrations of the same models will
differ in their accessibility relation (as ?? stipulates directly what W ∗ and V ∗
should be). The finest filtration will have as few related worlds as possible,
whereas the coarsest will have as many as possible.

Definition 42.7. Where Γ is closed under subformulas, the finest filtration M∗ of a model M is defined by putting:

R∗[u][v] if and only if ∃u′ ∈ [u] ∃v′ ∈ [v]: Ru′v′.

Proposition 42.8. The finest filtration M∗ is indeed a filtration.

Proof. We need to check that R∗, so defined, satisfies ??. We check the three conditions in turn.

If Ruv, then since u ∈ [u] and v ∈ [v], also R∗[u][v], so ?? is satisfied.

For ??, suppose □ϕ ∈ Γ, R∗[u][v], and M, u ⊩ □ϕ. By definition of R∗, there are u′ ≡ u and v′ ≡ v such that Ru′v′. Since u and u′ agree on Γ, also M, u′ ⊩ □ϕ, so that M, v′ ⊩ ϕ. By closure of Γ under subformulas, v and v′ agree on ϕ, so M, v ⊩ ϕ, as desired.

We leave the verification of ?? as an exercise.

Definition 42.9. Where Γ is closed under subformulas, the coarsest filtration M∗ of a model M is defined by putting R∗[u][v] if and only if both of the following conditions are met:

1. If □ϕ ∈ Γ and M, u ⊩ □ϕ then M, v ⊩ ϕ;

2. If ♦ϕ ∈ Γ and M, v ⊩ ϕ then M, u ⊩ ♦ϕ.

Proposition 42.10. The coarsest filtration M∗ is indeed a filtration.

Proof. Given the definition of R∗, the only condition that is left to verify is the implication from Ruv to R∗[u][v]. So assume Ruv. Suppose □ϕ ∈ Γ and M, u ⊩ □ϕ; then obviously M, v ⊩ ϕ, and ?? is satisfied. Suppose ♦ϕ ∈ Γ and M, v ⊩ ϕ. Then M, u ⊩ ♦ϕ since Ruv, and ?? is satisfied.
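As an illustration, both relations can be computed from the classes directly. The sketch assumes the conventions of the earlier sketches (classes as lists of worlds, formulas as tuples, truth(w, ϕ) an assumed helper):

```python
def finest_R(classes, R):
    """R*[u][v] iff some u' in [u] and some v' in [v] with R u' v'."""
    return {(i, j)
            for i, cu in enumerate(classes)
            for j, cv in enumerate(classes)
            if any((u, v) in R for u in cu for v in cv)}

def coarsest_R(classes, truth, Gamma):
    """R*[u][v] iff both conditions of the coarsest filtration hold.
    Worlds in a class agree on Gamma, so checking one representative
    per class suffices."""
    def ok(u, v):
        for f in Gamma:
            if f[0] == "box" and truth(u, f) and not truth(v, f[1]):
                return False
            if f[0] == "dia" and truth(v, f[1]) and not truth(u, f):
                return False
        return True
    return {(i, j)
            for i, cu in enumerate(classes)
            for j, cv in enumerate(classes)
            if ok(cu[0], cv[0])}
```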

Example 42.11. Let W = Z⁺, Rnm iff m = n + 1, and V(p) = {2n : n ∈ N}. The model M = ⟨W, R, V⟩ is depicted in ??. The worlds are 1, 2, etc.; each world can access exactly one other world, namely its successor, and p is true at all and only the even numbers.

Now let Γ be the set of subformulas of □p → p, i.e., {p, □p, □p → p}. p is true at all and only the even numbers, □p is true at all and only the odd numbers, so □p → p is true at all and only the even numbers. In other words,


[Figure 42.1: An infinite model and its filtrations. The model is the chain 1 → 2 → 3 → 4 → · · ·, with ¬p at the odd-numbered worlds and p at the even-numbered ones; its filtrations have the two worlds [1] (¬p) and [2] (p).]

every odd number makes □p true and p and □p → p false; every even number makes p and □p → p true, but □p false. So W∗ = {[1], [2]}, where [1] = {1, 3, 5, . . . } and [2] = {2, 4, 6, . . . }. Since 2 ∈ V(p), [2] ∈ V∗(p); since 1 ∉ V(p), [1] ∉ V∗(p). So V∗(p) = {[2]}.

Any filtration based on W∗ must have an accessibility relation that includes ⟨[1], [2]⟩ and ⟨[2], [1]⟩: since R12, we must have R∗[1][2] by ??, and since R23 we must have R∗[2][3], and [3] = [1]. It cannot include ⟨[1], [1]⟩: if it did, we'd have R∗[1][1], M, 1 ⊩ □p but M, 1 ⊮ p, contradicting ??. Nothing requires or rules out that R∗[2][2]. So there are two possible filtrations of M, corresponding to the two accessibility relations

{⟨[1], [2]⟩, ⟨[2], [1]⟩} and {⟨[1], [2]⟩, ⟨[2], [1]⟩, ⟨[2], [2]⟩}.

In either case, p and □p → p are false and □p is true at [1]; p and □p → p are true and □p is false at [2].

42.5 Filtrations are Finite


We’ve defined filtrations for any set Γ that is closed under sub-formulas. Noth-
ing in the definition itself guarantees that filtrations are finite. In fact, when Γ
is infinite (e.g., is the set of all formulas), it may well be infinite. However, if
Γ is finite (e.g., when it is the set of sub-formulas of a given formula ϕ), so is
any filtration through Γ.

Proposition 42.12. If Γ is finite then any filtration M∗ of a model M through Γ is


also finite.

Proof. The size of W∗ is the number of different classes [w] under the equivalence relation ≡. Any two worlds u, v in such a class, that is, any u and v with u ≡ v, agree on all formulas ϕ in Γ: for each ϕ ∈ Γ, either ϕ is true at both u and v, or at neither. So each class [w] corresponds to a subset of Γ, namely the set of all ϕ ∈ Γ such that ϕ is true at the worlds in [w]. No two different classes [u] and [v] correspond to the same subset of Γ. For if the set of formulas true at u and that of formulas true at v are the same, then u and v agree on all formulas in Γ, i.e., u ≡ v; but then [u] = [v]. So there is an injective function from W∗ to ℘(Γ), and hence |W∗| ≤ |℘(Γ)|. Hence if Γ contains n sentences, the cardinality of W∗ is no greater than 2^n.

42.6 K and S5 have the Finite Model Property


Definition 42.13. A system Σ of modal logic is said to have the finite model
property if whenever a formula ϕ is true at a world in a model of Σ then ϕ is
true at a world in a finite model of Σ.

Proposition 42.14. K has the finite model property.

Proof. K is the set of valid formulas, i.e., any model is a model of K. By ??, if M, w ⊩ ϕ, then M∗, [w] ⊩ ϕ for any filtration of M through the set Γ of subformulas of ϕ. Any formula has only finitely many subformulas, so Γ is finite. By ??, |W∗| ≤ 2^n, where n is the number of formulas in Γ. And since K imposes no restriction on models, M∗ is a K-model.

To show that a logic L has the finite model property via filtrations it is essential that the filtration of an L-model is itself an L-model. Often this requires a fair bit of work, and not every filtration yields an L-model. However, for universal models, this still holds.

Proposition 42.15. Let U be the class of universal models (see ??) and UFin the class
of all finite universal models. Then any formula ϕ is valid in U if and only if it is
valid in UFin .

Proof. Finite universal models are universal models, so the left-to-right direction is trivial. For the right-to-left direction, suppose that ϕ is false at some world w in a universal model M. Let Γ contain ϕ as well as all of its subformulas; clearly Γ is finite. Take a filtration M∗ of M; then M∗ is finite by ??, and by ??, ϕ is false at [w] in M∗. It remains to observe that M∗ is also universal: given u and v, by hypothesis Ruv, and by ??, also R∗[u][v].

Corollary 42.16. S5 has the finite model property.

Proof. By ??, if ϕ is true at a world in some reflexive and euclidean model then
it is true at a world in a universal model. By ??, it is true at a world in a finite
universal model (namely the filtration of the model through the set of sub-
formulas of ϕ). Every universal model is also reflexive and euclidean; so ϕ is
true at a world in a finite reflexive euclidean model.


42.7 S5 is Decidable
The finite model property gives us an easy way to show that systems of modal logic given by schemas are decidable (i.e., that there is a computable procedure to determine whether a formula is derivable in the system or not).

Theorem 42.17. S5 is decidable.

Proof. Let ϕ be given, and suppose the propositional variables occurring in ϕ are among p1, . . . , pk. Since for each n there are only finitely many models with n worlds assigning values to p1, . . . , pk, we can enumerate, in parallel, all the theorems of S5, by generating proofs in some systematic way, and all the models containing 1, 2, . . . worlds, checking whether ϕ fails at a world in some such model. Eventually one of the two parallel processes will give an answer, as by ?? and ??, either ϕ is derivable or it fails in a finite universal model.
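Schematically, the proof describes a dovetailed search. In the following illustrative Python sketch, both helper functions are assumptions standing in for the proof search and the finite model search; the finite model property is what guarantees that the loop terminates:

```python
from itertools import count

def decide_S5(phi, provable_within, countermodel_within):
    """Dovetail the two searches from the proof. Assumed helpers:
    provable_within(phi, k): is there an S5 proof of phi of size <= k?
    countermodel_within(phi, k): does phi fail in some universal model
    with at most k worlds? One of the two succeeds for some k."""
    for k in count(1):
        if provable_within(phi, k):
            return True   # phi is a theorem of S5
        if countermodel_within(phi, k):
            return False  # phi fails in a finite universal model
```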

The above proof works for S5 because filtrations of universal models are
automatically universal. The same holds for reflexivity and seriality, but more
work is needed for other properties.

42.8 Filtrations and Properties of Accessibility


As noted, filtrations of universal, serial, and reflexive models are always also
universal, serial, or reflexive. But not every filtration of a symmetric or tran-
sitive model is symmetric or transitive, respectively. In some cases, however,
it is possible to define filtrations so that this does hold. In order to do so, we
proceed as in the definition of the coarsest filtration, but add additional condi-
tions to the definition of R∗ . Let Γ be closed under sub-formulas. Consider the
relations Ci(u, v) in ?? between worlds u, v in a model M = ⟨W, R, V⟩. We can define R∗[u][v] on the basis of combinations of these conditions. For instance, if we stipulate that R∗[u][v] iff the condition C1(u, v) holds, we get exactly the coarsest filtration. If we stipulate R∗[u][v] iff both C1(u, v) and C2(u, v) hold, we get a different filtration. It is “finer” than the coarsest since fewer pairs of worlds satisfy C1(u, v) and C2(u, v) than C1(u, v) alone.
C1(u, v): if □ϕ ∈ Γ and M, u ⊩ □ϕ then M, v ⊩ ϕ; and
          if ♦ϕ ∈ Γ and M, v ⊩ ϕ then M, u ⊩ ♦ϕ.
C2(u, v): if □ϕ ∈ Γ and M, v ⊩ □ϕ then M, u ⊩ ϕ; and
          if ♦ϕ ∈ Γ and M, u ⊩ ϕ then M, v ⊩ ♦ϕ.
C3(u, v): if □ϕ ∈ Γ and M, u ⊩ □ϕ then M, v ⊩ □ϕ; and
          if ♦ϕ ∈ Γ and M, v ⊩ ♦ϕ then M, u ⊩ ♦ϕ.
C4(u, v): if □ϕ ∈ Γ and M, v ⊩ □ϕ then M, u ⊩ □ϕ; and
          if ♦ϕ ∈ Γ and M, u ⊩ ♦ϕ then M, v ⊩ ♦ϕ.

Table 42.1: Conditions on possible worlds for defining filtrations.


Theorem 42.18. Let M = ⟨W, R, V⟩ be a model, Γ closed under subformulas. Let W∗ and V∗ be defined as in ??. Then:

1. Suppose R∗[u][v] if and only if C1(u, v) ∧ C2(u, v). Then R∗ is symmetric, and M∗ = ⟨W∗, R∗, V∗⟩ is a filtration if M is symmetric.

2. Suppose R∗[u][v] if and only if C1(u, v) ∧ C3(u, v). Then R∗ is transitive, and M∗ = ⟨W∗, R∗, V∗⟩ is a filtration if M is transitive.

3. Suppose R∗[u][v] if and only if C1(u, v) ∧ C2(u, v) ∧ C3(u, v) ∧ C4(u, v). Then R∗ is symmetric and transitive, and M∗ = ⟨W∗, R∗, V∗⟩ is a filtration if M is symmetric and transitive.

4. Suppose R∗[u][v] if and only if C1(u, v) ∧ C3(u, v) ∧ C4(u, v). Then R∗ is transitive and euclidean, and M∗ = ⟨W∗, R∗, V∗⟩ is a filtration if M is transitive and euclidean.

Proof. 1. It's immediate that R∗ is symmetric, since C1(u, v) ⇔ C2(v, u) and C2(u, v) ⇔ C1(v, u). So it remains to show that if M is symmetric then M∗ is a filtration through Γ. Condition C1(u, v) guarantees that ?? and ?? of ?? are satisfied. So we just have to verify ??, i.e., that Ruv implies R∗[u][v].

So suppose Ruv. To show R∗[u][v] we need to establish that C1(u, v) and C2(u, v). For C1: if □ϕ ∈ Γ and M, u ⊩ □ϕ then also M, v ⊩ ϕ (since Ruv). Similarly, if ♦ϕ ∈ Γ and M, v ⊩ ϕ then M, u ⊩ ♦ϕ since Ruv. For C2: if □ϕ ∈ Γ and M, v ⊩ □ϕ then Ruv implies Rvu by symmetry, so that M, u ⊩ ϕ. Similarly, if ♦ϕ ∈ Γ and M, u ⊩ ϕ then M, v ⊩ ♦ϕ (since Rvu by symmetry).

2. Exercise.

3. Exercise.

4. Exercise.
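In the style of the earlier sketches, all of these constructions are instances of one pattern: intersect a list of conditions. The predicates C1, . . . , C4 are assumed to be implemented like the conditions of Table 42.1 (cf. the ok helper in the earlier sketch):

```python
def filtration_R(classes, conds):
    """R*[u][v] iff every condition in conds holds of representatives of
    [u] and [v]. With conds = [C1] this is the coarsest filtration; with
    conds = [C1, C2] the symmetric variant of part (1), and so on."""
    return {(i, j)
            for i, cu in enumerate(classes)
            for j, cv in enumerate(classes)
            if all(c(cu[0], cv[0]) for c in conds)}
```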

42.9 Filtrations of Euclidean Models


The approach of ?? does not work in the case of models that are euclidean or serial and euclidean. Consider the model at the top of ??, which is both euclidean and serial. Let Γ = {p, □p}. When taking a filtration through Γ, then [w1] = [w3], since w1 and w3 are the only worlds that agree on Γ. Any filtration will also have the arrows inherited from M, as depicted in ??. That model isn't euclidean. Moreover, we cannot add arrows to that model in order to make it euclidean. We would have to add double arrows between [w2] and


[Figure 42.2: A serial and euclidean model with worlds w1, . . . , w5, where p is true at w2 and w4 only, and □p is true at w1, w2, and w3 only.]

[Figure 42.3: The filtration of the model in ??, with worlds [w2] (p, □p), [w1] = [w3] (¬p, □p), [w4] (p, not □p), and [w5] (¬p, not □p), and the arrows inherited from the original model.]

[w4], and then also between [w2] and [w5]. But □p is supposed to be true at w2, while p is false at w5.

In particular, to obtain a euclidean filtration it is not enough to consider filtrations through arbitrary Γ's closed under subformulas. Instead we need to consider sets Γ that are modally closed (see ??). Such sets of sentences are infinite, and therefore do not immediately yield a finite model property or the decidability of the corresponding system.

Theorem 42.19. Let Γ be modally closed, M = ⟨W, R, V⟩, and M∗ = ⟨W∗, R∗, V∗⟩ be a coarsest filtration of M.

1. If M is symmetric, so is M∗.

2. If M is transitive, so is M∗.

3. If M is euclidean, so is M∗.

Proof. 1. Exercise. Use the fact that B and B♦ are valid in all symmetric models.

2. If M∗ is a coarsest filtration, then by definition R∗[u][v] holds if and only if C1(u, v). For transitivity, suppose C1(u, v) and C1(v, w); we have to show C1(u, w). Suppose M, u ⊩ □ϕ; then M, u ⊩ □□ϕ, since 4 is valid in all transitive models; since □□ϕ ∈ Γ by closure, C1(u, v) gives M, v ⊩ □ϕ, and C1(v, w) then gives M, w ⊩ ϕ. Now suppose M, w ⊩ ϕ; then M, v ⊩ ♦ϕ by C1(v, w), since ♦ϕ ∈ Γ by modal closure. By C1(u, v), we get M, u ⊩ ♦♦ϕ, since ♦♦ϕ ∈ Γ by modal closure. Since 4♦ is valid in all transitive models, M, u ⊩ ♦ϕ.

3. Exercise. Use the fact that both 5 and 5♦ are valid in all euclidean models.

Problems
Problem 42.1. Complete the proof of ??
Problem 42.2. Complete the proof of ??.
Problem 42.3. Consider the following model M = ⟨W, R, V⟩, where W = {0σ : σ ∈ B∗}, the set of sequences of 0s and 1s starting with 0, with Rσσ′ iff σ′ = σ0 or σ′ = σ1, and V(p) = {σ0 : σ ∈ B∗} and V(q) = {σ1 : σ ∈ B∗ \ {1}}. Here's a picture:

[Picture: the first three levels of the tree of worlds 0, 00, 01, 000, 001, 010, 011, . . . , where each world σ has the two successors σ0 and σ1; p is true (and q false) exactly at the worlds ending in 0, and q is true (and p false) exactly at the worlds ending in 1.]

We have M, w ⊮ □(p ∨ q) → (□p ∨ □q) for every w.

Let Γ be the set of subformulas of □(p ∨ q) → (□p ∨ □q). What are W∗ and V∗? What is the accessibility relation of the finest filtration of M? Of the coarsest?
Problem 42.4. Show that any filtration of a serial or reflexive model is also
serial or reflexive (respectively).


Problem 42.5. Find a non-symmetric (non-transitive, non-euclidean) filtration


of a symmetric (transitive, euclidean) model.


Problem 42.8. Complete the proof of ??.

Problem 42.9. Complete the proof of ??.



Chapter 43

Modal Tableaux

Draft chapter on prefixed tableaux for modal logic. Needs more ex-
amples, completeness proofs, and discussion of how one can find coun-
termodels from unsuccessful searches for closed tableaux.

43.1 Introduction
Tableaux are certain (downward-branching) trees of signed formulas, i.e., pairs
consisting of a truth value sign (T or F) and a sentence

T ϕ or F ϕ.

A tableau begins with a number of assumptions. Each further signed formula


is generated by applying one of the inference rules. Some inference rules add
one or more signed formulas to a tip of the tree; others add two new tips,
resulting in two branches. Rules result in signed formulas where the formula
is less complex than that of the signed formula to which it was applied. When
a branch contains both T ϕ and F ϕ, we say the branch is closed. If every branch
in a tableau is closed, the entire tableau is closed. A closed tableau constitutes a derivation that shows that the set of signed formulas which were used to begin the tableau is unsatisfiable. This can be used to define a ⊢ relation: Γ ⊢ ϕ iff there is some finite set Γ0 = {ψ1, . . . , ψn} ⊆ Γ such that there is a closed tableau for the assumptions

{F ϕ, Tψ1, . . . , Tψn}.

For modal logics, we have to both extend the notion of signed formula and add rules that cover □ and ♦. In addition to a sign (T or F), formulas in modal tableaux also have prefixes σ. The prefixes are non-empty sequences of positive integers, i.e., σ ∈ (Z⁺)∗ \ {Λ}. We write such prefixes without


¬T: from σ T ¬ϕ, infer σ F ϕ.                ¬F: from σ F ¬ϕ, infer σ T ϕ.

∧T: from σ T ϕ ∧ ψ, infer σ T ϕ and σ T ψ.   ∧F: from σ F ϕ ∧ ψ, branch to σ F ϕ | σ F ψ.

∨T: from σ T ϕ ∨ ψ, branch to σ T ϕ | σ T ψ. ∨F: from σ F ϕ ∨ ψ, infer σ F ϕ and σ F ψ.

→T: from σ T ϕ → ψ, branch to σ F ϕ | σ T ψ. →F: from σ F ϕ → ψ, infer σ T ϕ and σ F ψ.

Table 43.1: Prefixed tableau rules for the propositional connectives

the surrounding ⟨ ⟩, and separate the individual elements by .'s instead of ,'s. If σ is a prefix, then σ.n is σ⌢⟨n⟩; e.g., if σ = 1.2.1, then σ.3 is 1.2.1.3. So, for instance,

1.2 T □ϕ → ϕ

is a prefixed signed formula (or just a prefixed formula for short).


Intuitively, the prefix names a world in a model that might satisfy the for-
mulas on a branch of a tableau, and if σ names some world, then σ.n names a
world accessible from (the world named by) σ.

43.2 Rules for K


The rules for the regular propositional connectives are the same as for regu-
lar propositional signed tableaux, just with prefixes added. In each case, the
rule applied to a signed formula σ S ϕ produces new formulas that are also
prefixed by σ. This should be intuitively clear: e.g., if ϕ ∧ ψ is true at (a world
named by) σ, then ϕ and ψ are true at σ (and not at any other world). We
collect the propositional rules in ??.
The closure condition is the same as for ordinary tableaux, although we
require that not just the formulas but also the prefixes must match. So a branch
is closed if it contains both

σ Tϕ and σFϕ

for some prefix σ and formula ϕ.


□T: from σ T □ϕ, infer σ.n T ϕ, where σ.n is used.
□F: from σ F □ϕ, infer σ.n F ϕ, where σ.n is new.

♦T: from σ T ♦ϕ, infer σ.n T ϕ, where σ.n is new.
♦F: from σ F ♦ϕ, infer σ.n F ϕ, where σ.n is used.

Table 43.2: The modal rules for K.

The rules for setting up assumptions are also as for ordinary tableaux, except that for assumptions we always use the prefix 1. (It does not matter which prefix we use, as long as it's the same for all assumptions.) So, e.g., we say that

ψ1, . . . , ψn ⊢ ϕ

iff there is a closed tableau for the assumptions

1 Tψ1, . . . , 1 Tψn, 1 F ϕ.

For the modal operators □ and ♦, the prefix of the conclusion of the rule applied to a formula with prefix σ is σ.n. However, which n is allowed depends on whether the sign is T or F.

The □T rule extends a branch containing σ T □ϕ by σ.n T ϕ. Similarly, the ♦F rule extends a branch containing σ F ♦ϕ by σ.n F ϕ. They can only be applied for a prefix σ.n which already occurs on the branch in which they are applied. Let's call such a prefix “used” (on the branch).

The □F rule extends a branch containing σ F □ϕ by σ.n F ϕ. Similarly, the ♦T rule extends a branch containing σ T ♦ϕ by σ.n T ϕ. These rules, however, can only be applied for a prefix σ.n which does not already occur on the branch in which they are applied. We call such prefixes “new” (to the branch).

The rules are given in ??.
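The “used”/“new” bookkeeping is simple to make precise if prefixes are represented as tuples of integers. The following sketch is illustrative only; branch entries are assumed to be (prefix, sign, formula) triples:

```python
def used_extensions(sigma, branch):
    """All prefixes sigma.n that already occur on the branch."""
    prefixes = {p for (p, _, _) in branch}
    return {p for p in prefixes if p[:-1] == sigma}

def new_extension(sigma, branch):
    """The least n such that sigma.n does not yet occur on the branch."""
    prefixes = {p for (p, _, _) in branch}
    n = 1
    while sigma + (n,) in prefixes:
        n += 1
    return sigma + (n,)
```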
The restriction that the prefix for □T must be used is necessary, as otherwise we would count the following as a closed tableau:


1. 1 T □ϕ     Assumption
2. 1 F ♦ϕ     Assumption
3. 1.1 T ϕ    □T 1
4. 1.1 F ϕ    ♦F 2

But □ϕ ⊭ ♦ϕ, so our proof system would be unsound. Likewise, ♦ϕ ⊭ □ϕ, but without the restriction that the prefix for □F must be new, this would be a closed tableau:

1. 1 T ♦ϕ     Assumption
2. 1 F □ϕ     Assumption
3. 1.1 T ϕ    ♦T 1
4. 1.1 F ϕ    □F 2

43.3 Tableaux for K


Example 43.1. We give a closed tableau that shows ⊢ (□ϕ ∧ □ψ) → □(ϕ ∧ ψ).

1. 1 F (□ϕ ∧ □ψ) → □(ϕ ∧ ψ)    Assumption
2. 1 T □ϕ ∧ □ψ                 →F 1
3. 1 F □(ϕ ∧ ψ)                →F 1
4. 1 T □ϕ                      ∧T 2
5. 1 T □ψ                      ∧T 2
6. 1.1 F ϕ ∧ ψ                 □F 3
7. 1.1 F ϕ  |  1.1 F ψ         ∧F 6
8. 1.1 T ϕ  |  1.1 T ψ         □T 4; □T 5
   ⊗           ⊗

Example 43.2. We give a closed tableau that shows ⊢ ♦(ϕ ∨ ψ) → (♦ϕ ∨ ♦ψ):

1. 1 F ♦(ϕ ∨ ψ) → (♦ϕ ∨ ♦ψ)    Assumption
2. 1 T ♦(ϕ ∨ ψ)                →F 1
3. 1 F ♦ϕ ∨ ♦ψ                 →F 1
4. 1 F ♦ϕ                      ∨F 3
5. 1 F ♦ψ                      ∨F 3
6. 1.1 T ϕ ∨ ψ                 ♦T 2
7. 1.1 T ϕ  |  1.1 T ψ         ∨T 6
8. 1.1 F ϕ  |  1.1 F ψ         ♦F 4; ♦F 5
   ⊗           ⊗


43.4 Soundness

This soundness proof reuses the soundness proof for classical propo-
sitional logic, i.e., it proves everything from scratch. That’s ok if you want
a self-contained soundness proof. If you already have seen soundness for
ordinary tableau this will be repetitive. It’s planned to make it possible
to switch between self-contained version and a version building on the
non-modal case.

In order to show that prefixed tableaux are sound, we have to show that if

1 Tψ1, . . . , 1 Tψn, 1 F ϕ

has a closed tableau then ψ1, . . . , ψn ⊨ ϕ. It is easier to prove the contrapositive: if for some M and world w, M, w ⊩ ψi for all i = 1, . . . , n but M, w ⊮ ϕ, then no tableau can close. Such a countermodel shows that the initial assumptions of the tableau are satisfiable. The strategy of the proof is to show that whenever all the prefixed formulas on a tableau branch are satisfiable, any application of a rule results in at least one extended branch that is also satisfiable. Since closed branches are unsatisfiable, any tableau for a satisfiable set of prefixed formulas must have at least one open branch.

In order to apply this strategy in the modal case, we have to extend our definition of “satisfiable” to modal models and prefixes. With that in hand, however, the proof is straightforward.

Definition 43.3. Let P be some set of prefixes, i.e., P ⊆ (Z⁺)∗ \ {Λ}, and let M be a model. A function f : P → W is an interpretation of P in M if, whenever σ and σ.n are both in P, then Rf(σ)f(σ.n).

Relative to an interpretation f of a set of prefixes P we can define:

1. M satisfies σ T ϕ iff M, f(σ) ⊩ ϕ.

2. M satisfies σ F ϕ iff M, f(σ) ⊮ ϕ.

Definition 43.4. Let Γ be a set of prefixed formulas, and let P(Γ) be the set of prefixes that occur in it. If f is an interpretation of P(Γ) in M, we say that M satisfies Γ with respect to f, written M, f ⊩ Γ, if M satisfies every prefixed formula in Γ with respect to f. Γ is satisfiable iff there is a model M and an interpretation f of P(Γ) such that M, f ⊩ Γ.
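For finite data, the condition on interpretations is a direct check. An illustrative sketch (f as a dictionary from prefixes, represented as tuples, to worlds):

```python
def is_interpretation(f, prefixes, R):
    """Whenever sigma and sigma.n both occur among the prefixes,
    f(sigma) must access f(sigma.n)."""
    return all((f[s], f[t]) in R
               for s in prefixes
               for t in prefixes if t[:-1] == s)
```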

Proposition 43.5. If Γ contains both σ T ϕ and σ F ϕ, for some formula ϕ and prefix σ, then Γ is unsatisfiable.

Proof. There cannot be a model M and interpretation f of P(Γ) such that both M, f(σ) ⊩ ϕ and M, f(σ) ⊮ ϕ.


Theorem 43.6 (Soundness). If Γ has a closed tableau, Γ is unsatisfiable.

Proof. We call a branch of a tableau satisfiable iff the set of signed formulas
on it is satisfiable, and let’s call a tableau satisfiable if it contains at least one
satisfiable branch.
We show the following: Extending a satisfiable tableau by one of the rules
of inference always results in a satisfiable tableau. This will prove the theo-
rem: any closed tableau results by applying rules of inference to the tableau
consisting only of assumptions from Γ. So if Γ were satisfiable, any tableau
for it would be satisfiable. A closed tableau, however, is clearly not satisfiable,
since all its branches are closed and closed branches are unsatisfiable.
Suppose we have a satisfiable tableau, i.e., a tableau with at least one sat-
isfiable branch. Applying a rule of inference either adds signed formulas to a
branch, or splits a branch in two. If the tableau has a satisfiable branch which
is not extended by the rule application in question, it remains a satisfiable
branch in the extended tableau, so the extended tableau is satisfiable. So we
only have to consider the case where a rule is applied to a satisfiable branch.
Let Γ be the set of signed formulas on that branch, and let σ S ϕ ∈ Γ be
the signed formula to which the rule is applied. If the rule does not result in a
split branch, we have to show that the extended branch, i.e., Γ together with
the conclusions of the rule, is still satisfiable. If the rule results in split branch,
we have to show that at least one of the two resulting branches is satisfiable.
First, we consider the possible inferences with only one premise.

1. The branch is expanded by applying ¬T to σ T ¬ψ ∈ Γ. Then the extended branch contains the signed formulas Γ ∪ {σ F ψ}. Suppose M, f ⊩ Γ; in particular, M, f(σ) ⊩ ¬ψ. Thus, M, f(σ) ⊮ ψ, i.e., M satisfies σ F ψ with respect to f.

2. The branch is expanded by applying ¬F to σ F ¬ψ ∈ Γ: Exercise.

3. The branch is expanded by applying ∧T to σ T ψ ∧ χ ∈ Γ, which results in two new signed formulas on the branch: σ T ψ and σ T χ. Suppose M, f ⊩ Γ, in particular M, f(σ) ⊩ ψ ∧ χ. Then M, f(σ) ⊩ ψ and M, f(σ) ⊩ χ. This means that M satisfies both σ T ψ and σ T χ with respect to f.

4. The branch is expanded by applying ∨F to σ F ψ ∨ χ ∈ Γ: Exercise.

5. The branch is expanded by applying →F to σ F ψ → χ ∈ Γ: This results in two new signed formulas on the branch: σ T ψ and σ F χ. Suppose M, f ⊩ Γ, in particular M, f(σ) ⊮ ψ → χ. Then M, f(σ) ⊩ ψ and M, f(σ) ⊮ χ. This means that M, f satisfies both σ T ψ and σ F χ.

6. The branch is expanded by applying □T to σ T □ψ ∈ Γ: This results in a new signed formula σ.n T ψ on the branch, for some σ.n ∈ P(Γ) (since σ.n must be used). Suppose M, f ⊩ Γ; in particular, M, f(σ) ⊩ □ψ. Since f is an interpretation of prefixes and both σ, σ.n ∈ P(Γ), we know that Rf(σ)f(σ.n). Hence, M, f(σ.n) ⊩ ψ, i.e., M, f satisfies σ.n T ψ.

7. The branch is expanded by applying □F to σ F □ψ ∈ Γ: This results in a new signed formula σ.n F ψ, where σ.n is a new prefix on the branch, i.e., σ.n ∉ P(Γ). Since Γ is satisfiable, there is a M and an interpretation f of P(Γ) such that M, f ⊩ Γ; in particular M, f(σ) ⊮ □ψ. We have to show that Γ ∪ {σ.n F ψ} is satisfiable. To do this, we define an interpretation of P(Γ) ∪ {σ.n} as follows:

Since M, f(σ) ⊮ □ψ, there is a w ∈ W such that Rf(σ)w and M, w ⊮ ψ. Let f′ be like f, except that f′(σ.n) = w. Since f′(σ) = f(σ) and Rf(σ)w, we have Rf′(σ)f′(σ.n), so f′ is an interpretation of P(Γ) ∪ {σ.n}. Obviously M, f′(σ.n) ⊮ ψ. Since f(σ′) = f′(σ′) for all prefixes σ′ ∈ P(Γ), M, f′ ⊩ Γ. So, M, f′ satisfies Γ ∪ {σ.n F ψ}.
Now let's consider the possible inferences with two premises.

1. The branch is expanded by applying ∧F to σ F ψ ∧ χ ∈ Γ, which results in two branches, a left one continuing through σ F ψ and a right one through σ F χ. Suppose M, f ⊩ Γ, in particular M, f(σ) ⊮ ψ ∧ χ. Then M, f(σ) ⊮ ψ or M, f(σ) ⊮ χ. In the former case, M, f satisfies σ F ψ, i.e., the left branch is satisfiable. In the latter, M, f satisfies σ F χ, i.e., the right branch is satisfiable.

2. The branch is expanded by applying ∨T to σ T ψ ∨ χ ∈ Γ: Exercise.

3. The branch is expanded by applying →T to σ T ψ → χ ∈ Γ: Exercise.

Corollary 43.7. If Γ ⊢ ϕ then Γ ⊨ ϕ.

Proof. If Γ ⊢ ϕ then for some ψ1, . . . , ψn ∈ Γ, ∆ = {1 F ϕ, 1 Tψ1, . . . , 1 Tψn} has a closed tableau. We want to show that Γ ⊨ ϕ. Suppose not, so for some M and w, M, w ⊩ ψi for i = 1, . . . , n, but M, w ⊮ ϕ. Let f(1) = w; then f is an interpretation of P(∆) into M, and M satisfies ∆ with respect to f. But by ??, ∆ is unsatisfiable since it has a closed tableau, a contradiction. So we must have Γ ⊨ ϕ after all.

Corollary 43.8. If ⊢ ϕ then ϕ is true in all models.

43.5 Rules for Other Accessibility Relations


In order to deal with logics determined by special accessibility relations, we
consider the additional rules in ??.
Adding these rules results in systems that are sound and complete for the
logics given in ??.


T□: from σ T □ϕ, infer σ T ϕ.            T♦: from σ F ♦ϕ, infer σ F ϕ.

D□: from σ T □ϕ, infer σ T ♦ϕ.           D♦: from σ F ♦ϕ, infer σ F □ϕ.

B□: from σ.n T □ϕ, infer σ T ϕ.          B♦: from σ.n F ♦ϕ, infer σ F ϕ.

4□: from σ T □ϕ, infer σ.n T □ϕ (σ.n is used).
4♦: from σ F ♦ϕ, infer σ.n F ♦ϕ (σ.n is used).

4r□: from σ.n T □ϕ, infer σ T □ϕ.        4r♦: from σ.n F ♦ϕ, infer σ F ♦ϕ.

Table 43.3: More modal rules.

Logic        R is . . .               Rules
T = KT       reflexive                T□, T♦
D = KD       serial                   D□, D♦
K4           transitive               4□, 4♦
B = KTB      reflexive, symmetric     T□, T♦, B□, B♦
S4 = KT4     reflexive, transitive    T□, T♦, 4□, 4♦
S5 = KT4B    reflexive, transitive,   T□, T♦, 4□, 4♦,
             euclidean                4r□, 4r♦

Table 43.4: Tableau rules for various modal logics.


43.6 Tableaux for Other Logics


Example 43.9. We give a closed tableau that shows S5 ⊢ □ϕ → □♦ϕ.

1. 1 F □ϕ → □♦ϕ    Assumption
2. 1 T □ϕ          →F 1
3. 1 F □♦ϕ         →F 1
4. 1.1 F ♦ϕ        □F 3
5. 1 F ♦ϕ          4r♦ 4
6. 1.1 F ϕ         ♦F 5
7. 1.1 T ϕ         □T 2
   ⊗

43.7 Soundness for Additional Rules


We say a rule is sound for a class of models if, whenever a branch in a tableau
is satisfiable in a model from that class, the branch resulting from applying
the rule is also satisfiable in a model from that class.

Proposition 43.10. T□ and T♦ are sound for reflexive models.

Proof. 1. The branch is expanded by applying T□ to σ T □ψ ∈ Γ: This results in a new signed formula σ T ψ on the branch. Suppose M, f ⊩ Γ; in particular, M, f(σ) ⊩ □ψ. Since R is reflexive, we know that Rf(σ)f(σ). Hence, M, f(σ) ⊩ ψ, i.e., M, f satisfies σ T ψ.

2. The branch is expanded by applying T♦ to σ F ♦ψ ∈ Γ: Exercise.

Proposition 43.11. D□ and D♦ are sound for serial models.

Proof. 1. The branch is expanded by applying D□ to σ T □ψ ∈ Γ: This results in a new signed formula σ T ♦ψ on the branch. Suppose M, f ⊩ Γ; in particular, M, f(σ) ⊩ □ψ. Since R is serial, there is a w ∈ W such that Rf(σ)w. Then M, w ⊩ ψ, and hence M, f(σ) ⊩ ♦ψ. So, M, f satisfies σ T ♦ψ.

2. The branch is expanded by applying D♦ to σ F ♦ψ ∈ Γ: Exercise.

Proposition 43.12. B□ and B♦ are sound for symmetric models.

Proof. 1. The branch is expanded by applying B□ to σ.n T □ψ ∈ Γ: This results in a new signed formula σ T ψ on the branch. Suppose M, f ⊩ Γ; in particular, M, f(σ.n) ⊩ □ψ. Since f is an interpretation of prefixes on the branch into M, we know that Rf(σ)f(σ.n). Since R is symmetric, Rf(σ.n)f(σ). Since M, f(σ.n) ⊩ □ψ, M, f(σ) ⊩ ψ. Hence, M, f satisfies σ T ψ.


2. The branch is expanded by applying B♦ to σ.n F ♦ψ ∈ Γ: Exercise.

Proposition 43.13. 4□ and 4♦ are sound for transitive models.

Proof. 1. The branch is expanded by applying 4□ to σ T □ψ ∈ Γ: This results in a new signed formula σ.n T □ψ on the branch, where σ.n is used. Suppose M, f ⊩ Γ; in particular, M, f(σ) ⊩ □ψ. Since f is an interpretation of prefixes on the branch into M and σ.n is used, we know that Rf(σ)f(σ.n). Now let w be any world such that Rf(σ.n)w. Since R is transitive, Rf(σ)w. Since M, f(σ) ⊩ □ψ, M, w ⊩ ψ. Hence, M, f(σ.n) ⊩ □ψ, and M, f satisfies σ.n T □ψ.

2. The branch is expanded by applying 4♦ to σ F ♦ψ ∈ Γ: Exercise.

Proposition 43.14. 4r□ and 4r♦ are sound for euclidean models.

Proof. 1. The branch is expanded by applying 4r□ to σ.n T □ψ ∈ Γ: This results in a new signed formula σ T □ψ on the branch. Suppose M, f ⊩ Γ; in particular, M, f(σ.n) ⊩ □ψ. Since f is an interpretation of prefixes on the branch into M, we know that Rf(σ)f(σ.n). Now let w be any world such that Rf(σ)w. Since R is euclidean, Rf(σ.n)w. Since M, f(σ.n) ⊩ □ψ, M, w ⊩ ψ. Hence, M, f(σ) ⊩ □ψ, and M, f satisfies σ T □ψ.

2. The branch is expanded by applying 4r♦ to σ.n F ♦ψ ∈ Γ: Exercise.

Corollary 43.15. The tableau systems given in ?? are sound for the respective classes
of models.

43.8 Simple Tableaux for S5


S5 is sound and complete with respect to the class of universal models, i.e., models where every world is accessible from every world. In universal models the accessibility relation doesn't matter: “there is a world w where M, w ⊩ ϕ” is true if and only if there is such a w that's accessible from u. So in S5 we can define models simply as a set of worlds and a valuation V. This suggests that we should be able to simplify the tableau rules as well. In the general case, we take as prefixes sequences of positive integers, so that we can keep track of which prefixes name worlds that are accessible from others: σ.n names a world accessible from σ. But in S5 any world is accessible from any world, so there is no need to keep track. Instead, we can use positive integers as prefixes. The simplified rules are given in ??.

Example 43.16. We give a simplified closed tableau that shows S5 ⊢ 5, i.e., ♦ϕ → □♦ϕ.


□T: from n T □ϕ, infer m T ϕ (m is used).    □F: from n F □ϕ, infer m F ϕ (m is new).

♦T: from n T ♦ϕ, infer m T ϕ (m is new).    ♦F: from n F ♦ϕ, infer m F ϕ (m is used).

Table 43.5: Simplified rules for S5.

1. 1 F ♦ϕ → □♦ϕ    Assumption
2. 1 T ♦ϕ          →F 1
3. 1 F □♦ϕ         →F 1
4. 2 F ♦ϕ          □F 3
5. 3 T ϕ           ♦T 2
6. 3 F ϕ           ♦F 4
   ⊗

43.9 Completeness for K


To show that the method of tableaux is complete, we have to show that when-
ever there is no closed tableau to show Γ ` ϕ, then Γ 2 ϕ, i.e., there is a
countermodel. But “there is no closed tableau” means that every way we
could try to construct one has to fail to close. The trick is to see that if ev-
ery such way fails to close, then a specific, systematic and exhaustive way also
fails to close. And this systematic and exhaustive way would close if a closed
tableau exists. The single tableau will contain, among its open branches, all
the information required to define a countermodel. The countermodel given
by an open branch in this tableau will contain the all the prefixes used on that
branch as the worlds, and a propositional variable p is true at σ iff σ T p occurs
on the branch.
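Reading off this countermodel from an open branch is mechanical. The sketch below is illustrative only (branch entries as (prefix, sign, formula) triples, propositional variables as strings); it implements the construction used in the completeness proof below:

```python
def model_from_branch(branch):
    """Worlds are the prefixes occurring on the (open, complete) branch,
    sigma accesses exactly the sigma.n on the branch, and p is true at
    sigma iff sigma T p occurs on the branch."""
    W = {p for (p, _, _) in branch}
    R = {(u, v) for u in W for v in W if v[:-1] == u}
    V = {}
    for (prefix, sign, fml) in branch:
        if sign == "T" and isinstance(fml, str):  # propositional variable
            V.setdefault(fml, set()).add(prefix)
    return W, R, V
```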

Definition 43.17. A branch in a tableau is called complete if, whenever it con-


tains a prefixed formula σ S ϕ to which a rule can be applied, it also contains

1. the prefixed formulas that are the corresponding conclusions of the rule,
in the case of propositional stacking rules;


2. one of the corresponding conclusion formulas in the case of proposi-


tional branching rules;

3. at least one possible conclusion in the case of modal rules that require a
new prefix;

4. the corresponding conclusion for every prefix occurring on the branch


in the case of modal rules that require a used prefix.

For instance, a complete branch contains σ T ψ and σ T χ whenever it contains σ T ψ ∧ χ. If it contains σ T ψ ∨ χ, it contains at least one of σ T ψ and σ T χ. If it contains σ F □ψ, it also contains σ.n F ψ for at least one n. And whenever it contains σ T □ψ, it also contains σ.n T ψ for every n such that σ.n is used on the branch.

Proposition 43.18. Every finite Γ has a tableau in which every branch is complete.

Proof. Consider an open branch in a tableau for Γ. There are finitely many prefixed formulas on the branch to which a rule could be applied. In some fixed order (say, top to bottom), for each of these prefixed formulas for which the conditions (1)–(4) do not already hold, apply the rules that can be applied to it to extend the branch. In some cases this will result in branching; apply the rule at the tip of each resulting branch for all remaining prefixed formulas. Since the number of prefixed formulas is finite, and the number of used prefixes on the branch is finite, this procedure eventually results in (possibly many) branches extending the original branch. Apply the procedure to each, and repeat. By construction, every branch of the resulting tableau is complete.

Theorem 43.19 (Completeness). If Γ has no closed tableau, Γ is satisfiable.

Proof. By the proposition, Γ has a tableau in which every branch is complete. Since it has no closed tableau, it thus has a tableau in which at least one branch is open and complete. Let ∆ be the set of prefixed formulas on the branch, and P(∆) the set of prefixes occurring in it.

We define a model M(∆) = ⟨P(∆), R, V⟩ where the worlds are the prefixes occurring in ∆, the accessibility relation is given by:

Rσσ′ iff σ′ = σ.n for some n

and

V(p) = {σ : σ T p ∈ ∆}.

We show by induction on ϕ that if σ T ϕ ∈ ∆ then M(∆), σ ⊩ ϕ, and if σ F ϕ ∈ ∆ then M(∆), σ ⊮ ϕ.


1. ϕ ≡ p: If σ T ϕ ∈ ∆ then σ ∈ V(p) (by definition of V) and so M(∆), σ ⊩ ϕ.
If σ F ϕ ∈ ∆ then σ T ϕ ∉ ∆, since the branch would otherwise be closed. So σ ∉ V(p) and thus M(∆), σ ⊮ ϕ.

2. ϕ ≡ ¬ψ: If σ T ϕ ∈ ∆, then σ F ψ ∈ ∆ since the branch is complete. By induction hypothesis, M(∆), σ ⊮ ψ and thus M(∆), σ ⊩ ϕ.
If σ F ϕ ∈ ∆, then σ T ψ ∈ ∆ since the branch is complete. By induction hypothesis, M(∆), σ ⊩ ψ and thus M(∆), σ ⊮ ϕ.

3. ϕ ≡ ψ ∧ χ: Exercise.

4. ϕ ≡ ψ ∨ χ: If σ T ϕ ∈ ∆, then either σ T ψ ∈ ∆ or σ T χ ∈ ∆ since the branch is complete. By induction hypothesis, either M(∆), σ ⊩ ψ or M(∆), σ ⊩ χ. Thus M(∆), σ ⊩ ϕ.
If σ F ϕ ∈ ∆, then both σ F ψ ∈ ∆ and σ F χ ∈ ∆ since the branch is complete. By induction hypothesis, both M(∆), σ ⊮ ψ and M(∆), σ ⊮ χ. Thus M(∆), σ ⊮ ϕ.

5. ϕ ≡ ψ → χ: Exercise.

6. ϕ ≡ □ψ: If σ T ϕ ∈ ∆, then, since the branch is complete, σ.n T ψ ∈ ∆ for every σ.n used on the branch, i.e., for every σ′ ∈ P(∆) such that Rσσ′. By induction hypothesis, M(∆), σ′ ⊩ ψ for every σ′ such that Rσσ′. Therefore, M(∆), σ ⊩ ϕ.
If σ F ϕ ∈ ∆, then for some σ.n, σ.n F ψ ∈ ∆ since the branch is complete. By induction hypothesis, M(∆), σ.n ⊮ ψ. Since Rσ(σ.n), there is a σ′ such that Rσσ′ and M(∆), σ′ ⊮ ψ. Thus M(∆), σ ⊮ ϕ.

7. ϕ ≡ ♦ψ: Exercise.

Since Γ ⊆ ∆, M(∆) ⊩ Γ.

Corollary 43.20. If Γ ⊨ ϕ then Γ ⊢ ϕ.

Corollary 43.21. If ϕ is true in all models, then ⊢ ϕ.

43.10 Countermodels from Tableaux


The proof of the completeness theorem doesn't just show that if ⊨ ϕ then ⊢ ϕ; it also gives us a method for constructing countermodels to ϕ if ⊭ ϕ. In the case of K, this method constitutes a decision procedure. For suppose ⊬ ϕ. Then the proof of ?? gives a method for constructing a complete tableau. The method in fact always terminates. The propositional rules for K only add prefixed formulas of lower complexity, i.e., each propositional rule need only be applied once on a branch for any signed formula σ S ϕ. New prefixes are only generated by the □F and ♦T rules, and these also only have to be applied once (and produce a single new prefix). □T and ♦F have to be applied potentially multiple times, but only once per prefix, and only finitely many new prefixes are generated. So the construction either results in a closed branch or a complete branch after finitely many stages.

Once a tableau with an open complete branch is constructed, the proof of ?? gives us an explicit model that satisfies the original set of prefixed formulas. So not only is it the case that, if Γ ⊨ ϕ, a closed tableau exists and Γ ⊢ ϕ; if we look for the closed tableau in the right way and end up with a “complete” tableau, we'll not only know that Γ ⊭ ϕ but actually be able to construct a countermodel.

Example 43.22. We know that ⊬ □(p ∨ q) → (□p ∨ □q). The construction of a tableau begins with:

1. 1 F □(p ∨ q) → (□p ∨ □q) ✓    Assumption
2. 1 T □(p ∨ q)                  →F 1
3. 1 F □p ∨ □q ✓                 →F 1
4. 1 F □p ✓                      ∨F 3
5. 1 F □q ✓                      ∨F 3
6. 1.1 F p ✓                     □F 4
7. 1.2 F q ✓                     □F 5

The tableau is of course not finished yet. In the next step, we consider the only line without a checkmark: the prefixed formula 1 T □(p ∨ q) on line 2. The construction of the complete tableau says to apply the □T rule for every prefix used on the branch, i.e., for both 1.1 and 1.2:

1. 1 F □(p ∨ q) → (□p ∨ □q) ✓    Assumption
2. 1 T □(p ∨ q)                  →F 1
3. 1 F □p ∨ □q ✓                 →F 1
4. 1 F □p ✓                      ∨F 3
5. 1 F □q ✓                      ∨F 3
6. 1.1 F p ✓                     □F 4
7. 1.2 F q ✓                     □F 5
8. 1.1 T p ∨ q                   □T 2
9. 1.2 T p ∨ q                   □T 2

Now lines 2, 8, and 9 don't have checkmarks. But no new prefix has been added, so we apply ∨T to lines 8 and 9, on all resulting branches (as long as they don't close):


[Figure 43.1: A countermodel to □(p ∨ q) → (□p ∨ □q): world 1 (where ¬p, ¬q) with arrows to 1.1 (where ¬p, q) and to 1.2 (where p, ¬q).]

1. 1 F □(p ∨ q) → (□p ∨ □q) ✓    Assumption
2. 1 T □(p ∨ q)                  →F 1
3. 1 F □p ∨ □q ✓                 →F 1
4. 1 F □p ✓                      ∨F 3
5. 1 F □q ✓                      ∨F 3
6. 1.1 F p ✓                     □F 4
7. 1.2 F q ✓                     □F 5
8. 1.1 T p ∨ q ✓                 □T 2
9. 1.2 T p ∨ q ✓                 □T 2
10. 1.1 T p ✓  |  1.1 T q ✓      ∨T 8
    ⊗
11.               1.2 T p ✓  |  1.2 T q ✓    ∨T 9
                                 ⊗

There is one remaining open branch, and it is complete. From it we define the model with worlds W = {1, 1.1, 1.2} (the only prefixes appearing on the open branch), the accessibility relation R = {⟨1, 1.1⟩, ⟨1, 1.2⟩}, and the assignment V(p) = {1.2} (because line 11 contains 1.2 T p) and V(q) = {1.1} (because line 10 contains 1.1 T q). The model is pictured in ??, and you can verify that it is a countermodel to □(p ∨ q) → (□p ∨ □q).

Problems
Problem 43.1. Find closed tableaux in K for the following formulas:

1. □¬p → □(p → q)

2. (□p ∨ □q) → □(p ∨ q)

3. ♦p → ♦(p ∨ q)
Problem 43.2. Complete the proof of ??.
Problem 43.3. Give closed tableaux that show the following:


1. KT5 ⊢ B;

2. KT5 ⊢ 4;

3. KDB4 ⊢ T;

4. KB4 ⊢ 5;

5. KB5 ⊢ 4;

6. KT ⊢ D.

Problem 43.4. Complete the proof of ??

Problem 43.5. Complete the proof of ??

Problem 43.6. Complete the proof of ??

Problem 43.7. Complete the proof of ??

Problem 43.8. Complete the proof of ??

Problem 43.9. Complete the proof of ??.



Part X

Intuitionistic Logic


This is a brief introduction to intuitionistic logic produced by Zesen


Qian and revised by RZ. It is not yet well integrated with the rest of the
text and needs examples and motivations.



Chapter 44

Introduction

44.1 Constructive Reasoning


In contrast to extensions of classical logic by modal operators or second-order quantifiers, intuitionistic logic is “non-classical” in that it restricts classical logic. Classical logic is non-constructive in various ways. Intuitionistic logic is intended to capture a more “constructive” kind of reasoning characteristic of a kind of constructive mathematics. The following examples may serve to illustrate some of the underlying motivations.

Suppose someone claimed that they had determined a natural number n with the property that if n is even, the Riemann hypothesis is true, and if n is odd, the Riemann hypothesis is false. Great news! Whether the Riemann hypothesis is true or not is one of the big open questions of mathematics, and they seem to have reduced the problem to one of calculation, that is, to the determination of whether a specific number is even or odd.
What is the magic value of n? They describe it as follows: n is the natural
number that is equal to 2 if the Riemann hypothesis is true, and 3 otherwise.
Angrily, you demand your money back. From a classical point of view, the
description above does in fact determine a unique value of n; but what you
really want is a value of n that is given explicitly.
To take another, perhaps less contrived example, consider the following
question. We know that it is possible to raise an irrational number to a rational
power, and get a rational result. For example, (√2)^2 = 2. What is less clear
is whether or not it is possible to raise an irrational number to an irrational
power, and get a rational result. The following theorem answers this in the
affirmative:

Theorem 44.1. There are irrational numbers a and b such that a^b is rational.

Proof. Consider √2^√2. If this is rational, we are done: we can let a = b = √2.
Otherwise, it is irrational. Then we have

(√2^√2)^√2 = √2^(√2·√2) = √2^2 = 2,

which is rational. So, in this case, let a be √2^√2, and let b be √2.

Does this constitute a valid proof? Most mathematicians feel that it does.
But again, there is something a little bit unsatisfying here: we have proved the
existence of a pair of real numbers with a certain property, without being able
to say which pair of numbers it is. It is possible to prove the same result, but in
such a way that the pair a, b is given in the proof: take a = √3 and b = log₃ 4.
Then

a^b = (√3)^(log₃ 4) = 3^((1/2)·log₃ 4) = (3^(log₃ 4))^(1/2) = 4^(1/2) = 2,

since 3^(log₃ x) = x.
Intuitionistic logic is designed to capture a kind of reasoning where moves
like the one in the first proof are disallowed. Proving the existence of an x
satisfying ϕ( x ) means that you have to give a specific x, and a proof that it
satisfies ϕ, like in the second proof. Proving that ϕ or ψ holds requires that
you can prove one or the other.
Formally speaking, intuitionistic logic is what you get if you restrict a
proof system for classical logic in a certain way. From the mathematical point
of view, these are just formal deductive systems, but, as already noted, they
are intended to capture a kind of mathematical reasoning. One can take this
to be the kind of reasoning that is justified on a certain philosophical view of
mathematics (such as Brouwer’s intuitionism); one can take it to be a kind of
mathematical reasoning which is more “concrete” and satisfying (along the
lines of Bishop’s constructivism); and one can argue about whether or not
the formal description captures the informal motivation. But whatever philo-
sophical positions we may hold, we can study intuitionistic logic as a formally
presented logic; and for whatever reasons, many mathematical logicians find
it interesting to do so.

44.2 Syntax of Intuitionistic Logic


The syntax of intuitionistic logic is the same as that for propositional logic. In
classical propositional logic it is possible to define connectives by others, e.g.,
one can define ϕ → ψ by ¬ ϕ ∨ ψ, or ϕ ∨ ψ by ¬(¬ ϕ ∧ ¬ψ). Thus, presentations
of classical logic often introduce some connectives as abbreviations for these
definitions. This is not so in intuitionistic logic, with two exceptions: ¬ ϕ can
be—and often is—defined as an abbreviation for ϕ → ⊥. Then, of course, ⊥
must not itself be defined! Also, ϕ ↔ ψ can be defined, as in classical logic, as
( ϕ → ψ ) ∧ ( ψ → ϕ ).
Formulas of propositional intuitionistic logic are built up from propositional
variables and the propositional constant ⊥ using logical connectives. We have:


1. A denumerable set At0 of propositional variables p0 , p1 , . . .

2. The propositional constant for falsity ⊥.

3. The logical connectives: ∧ (conjunction), ∨ (disjunction), → (conditional)

4. Punctuation marks: (, ), and the comma.

Definition 44.2 (Formula). The set Frm(L0 ) of formulas of propositional intu-


itionistic logic is defined inductively as follows:

1. ⊥ is an atomic formula.

2. Every propositional variable pi is an atomic formula.

3. If ϕ and ψ are formulas, then ( ϕ ∧ ψ) is a formula.

4. If ϕ and ψ are formulas, then ( ϕ ∨ ψ) is a formula.

5. If ϕ and ψ are formulas, then ( ϕ → ψ) is a formula.

6. Nothing else is a formula.

In addition to the primitive connectives introduced above, we also use


the following defined symbols: ¬ (negation) and ↔ (biconditional). Formulas
constructed using the defined operators are to be understood as follows:

1. ¬ ϕ abbreviates ϕ → ⊥.

2. ϕ ↔ ψ abbreviates ( ϕ → ψ) ∧ (ψ → ϕ).

Although ¬ is officially treated as an abbreviation, we will sometimes give


explicit rules and clauses in definitions for ¬ as if it were primitive. This is
mostly so we can state practice problems.
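Read as a grammar, the inductive definition above translates directly into a datatype. The following Haskell sketch is one way to set it up (the names Formula, neg, and iff are ours, not anything fixed by the text):

-- Formulas of propositional intuitionistic logic (Definition 44.2).
data Formula
  = Bot                     -- the propositional constant ⊥
  | Var Int                 -- propositional variable p_i
  | Conj Formula Formula    -- (ϕ ∧ ψ)
  | Disj Formula Formula    -- (ϕ ∨ ψ)
  | Cond Formula Formula    -- (ϕ → ψ)
  deriving (Eq, Show)

-- ¬ϕ abbreviates ϕ → ⊥.
neg :: Formula -> Formula
neg phi = Cond phi Bot

-- ϕ ↔ ψ abbreviates (ϕ → ψ) ∧ (ψ → ϕ).
iff :: Formula -> Formula -> Formula
iff phi psi = Conj (Cond phi psi) (Cond psi phi)

Note that ⊥ is a constructor of its own, as required: since ¬ϕ is defined from ⊥, the constant itself cannot be defined away.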

44.3 The Brouwer-Heyting-Kolmogorov Interpretation

Proofs of validity of intuitionistic propositions using the BHK inter-


pretation are confusing; they have to be explained better.

There is an informal constructive interpretation of the intuitionist connectives,


usually known as the Brouwer-Heyting-Kolmogorov interpretation. It uses
the notion of a “construction,” which you may think of as a constructive proof.
(We don’t use “proof” in the BHK interpretation so as not to get confused with
the notion of a derivation in a formal proof system.) Based on this intuitive
notion, the BHK interpretation explains the meanings of the intuitionistic con-
nectives.


1. We assume that we know what constitutes a construction of an atomic


statement.
2. A construction of ϕ1 ∧ ϕ2 is a pair ⟨M1, M2⟩ where M1 is a construction
of ϕ1 and M2 is a construction of ϕ2.
3. A construction of ϕ1 ∨ ϕ2 is a pair ⟨s, M⟩ where s is 1 and M is a con-
struction of ϕ1, or s is 2 and M is a construction of ϕ2.
4. A construction of ϕ → ψ is a function that converts a construction of ϕ
into a construction of ψ.
5. There is no construction for ⊥ (absurdity).
6. ¬ϕ is defined as a synonym for ϕ → ⊥. That is, a construction of ¬ϕ is a
function converting a construction of ϕ into a construction of ⊥.

Example 44.3. Take ¬⊥ for example. A construction of it is a function which,


given any construction of ⊥ as input, provides a construction of ⊥ as output.
Obviously, the identity function Id is such a construction: given a construc-
tion M of ⊥, Id ( M) = M yields a construction of ⊥.
Generally speaking, ¬ ϕ means “A construction of ϕ is impossible”.
Example 44.4. Let us prove ϕ → ¬¬ ϕ for any proposition ϕ, which is ϕ →
(( ϕ → ⊥) → ⊥). The construction should be a function f that, given a con-
struction M of ϕ, returns a construction f ( M) of ( ϕ → ⊥) → ⊥. Here is how f
constructs the construction of ( ϕ → ⊥) → ⊥: We have to define a function g
which, when given a construction h of ϕ → ⊥ as input, outputs a construction
of ⊥. We can define g as follows: apply the input h to the construction M of
ϕ (that we received earlier). Since the output h( M) of h is a construction of ⊥,
f ( M )(h) = h( M) is a construction of ⊥ if M is a construction of ϕ.
Example 44.5. Let us give a construction for ¬( ϕ ∧ ¬ ϕ), i.e., ( ϕ ∧ ( ϕ → ⊥)) →
⊥. This is a function f which, given as input a construction M of ϕ ∧ ( ϕ → ⊥),
yields a construction of ⊥. A construction of a conjunction ψ1 ∧ ψ2 is a pair
⟨N1, N2⟩ where N1 is a construction of ψ1 and N2 is a construction of ψ2. We
can define functions p1 and p2 which recover from a construction of ψ1 ∧ ψ2
the constructions of ψ1 and ψ2, respectively:

p1(⟨N1, N2⟩) = N1
p2(⟨N1, N2⟩) = N2

Here is what f does: First it applies p1 to its input M. That yields a construc-
tion of ϕ. Then it applies p2 to M, yielding a construction of ϕ → ⊥. Such a
construction, in turn, is a function p2 ( M) which, if given as input a construc-
tion of ϕ, yields a construction of ⊥. In other words, if we apply p2 ( M ) to
p1 ( M), we get a construction of ⊥. Thus, we can define f ( M) = p2 ( p1 ( M )).


Example 44.6. Let us give a construction of ((ϕ ∧ ψ) → χ) → (ϕ → (ψ → χ)),
i.e., a function f which turns a construction g of (ϕ ∧ ψ) → χ into a construction
of ϕ → (ψ → χ). The construction g is itself a function (from constructions
of ϕ ∧ ψ to constructions of χ). And the output f(g) is a function h_g from
constructions of ϕ to functions from constructions of ψ to constructions of χ.
Ok, this is confusing. We have to construct a certain function h_g, which
will be the output of f for input g. The input of h_g is a construction M of ϕ.
The output of h_g(M) should be a function k_{g,M} from constructions N of ψ to
constructions of χ. Let k_{g,M}(N) = g(⟨M, N⟩). Remember that ⟨M, N⟩ is a con-
struction of ϕ ∧ ψ. So k_{g,M} is a construction of ψ → χ: it maps constructions N
of ψ to constructions of χ. Now let h_g(M) = k_{g,M}. That's a function that
maps constructions M of ϕ to constructions k_{g,M} of ψ → χ. Now let f(g) = h_g.
That's a function that maps constructions g of (ϕ ∧ ψ) → χ to constructions of
ϕ → (ψ → χ). Whew!

The statement ϕ ∨ ¬ ϕ is called the Law of Excluded Middle. We can prove


it for some specific ϕ (e.g., ⊥ ∨ ¬⊥), but not in general. This is because the
intuitionistic disjunction requires a construction of one of the disjuncts, but
there are statements which currently can neither be proved nor refuted (say,
Goldbach’s conjecture). However, you can’t refute the law of excluded middle
either: that is, ¬¬( ϕ ∨ ¬ ϕ) holds.

Example 44.7. To prove ¬¬( ϕ ∨ ¬ ϕ), we need a function f that transforms


a construction of ¬( ϕ ∨ ¬ ϕ), i.e., of ( ϕ ∨ ( ϕ → ⊥)) → ⊥, into a construction
of ⊥. In other words, we need a function f such that f ( g) is a construction
of ⊥ if g is a construction of ¬( ϕ ∨ ¬ ϕ).
Suppose g is a construction of ¬(ϕ ∨ ¬ϕ), i.e., a function that transforms a
construction of ϕ ∨ ¬ϕ into a construction of ⊥. A construction of ϕ ∨ ¬ϕ is a
pair ⟨s, M⟩ where either s = 1 and M is a construction of ϕ, or s = 2 and M is
a construction of ¬ϕ. Let h1 be the function mapping a construction M1 of ϕ
to a construction of ϕ ∨ ¬ϕ: it maps M1 to ⟨1, M1⟩. And let h2 be the function
mapping a construction M2 of ¬ϕ to a construction of ϕ ∨ ¬ϕ: it maps M2 to
⟨2, M2⟩.
Let k be g ∘ h1: it is a function which, if given a construction of ϕ, returns a
construction of ⊥, i.e., it is a construction of ϕ → ⊥, and hence of ¬ϕ. Now let l
be g ∘ h2. It is a function which, given a construction of ¬ϕ, provides a construction
of ⊥. Since k is a construction of ¬ϕ, l(k) is a construction of ⊥.
Together, what we've done is describe how we can turn a construction g
of ¬(ϕ ∨ ¬ϕ) into a construction of ⊥, i.e., the function f mapping a con-
struction g of ¬(ϕ ∨ ¬ϕ) to the construction l(k) of ⊥ is a construction of
¬¬(ϕ ∨ ¬ϕ).
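All of the constructions in Examples 44.4–44.7 can also be written down as programs: pairs serve as constructions of conjunctions, tagged values as constructions of disjunctions, functions as constructions of conditionals, and an empty type as ⊥. Here is a sketch in Haskell (the function names are ours); Left and Right play the roles of the injections ⟨1, –⟩ and ⟨2, –⟩:

import Data.Void (Void)

-- Example 44.4: ϕ → ¬¬ϕ, i.e., ϕ → ((ϕ → ⊥) → ⊥).
doubleNegIntro :: a -> ((a -> Void) -> Void)
doubleNegIntro m = \h -> h m

-- Example 44.5: ¬(ϕ ∧ ¬ϕ), i.e., (ϕ ∧ (ϕ → ⊥)) → ⊥.
-- fst and snd play the roles of p1 and p2.
nonContradiction :: (a, a -> Void) -> Void
nonContradiction m = snd m (fst m)

-- Example 44.6: ((ϕ ∧ ψ) → χ) → (ϕ → (ψ → χ)).
curry' :: ((a, b) -> c) -> (a -> (b -> c))
curry' g = \m -> \n -> g (m, n)

-- Example 44.7: ¬¬(ϕ ∨ ¬ϕ); k = \m -> g (Left m) is the construction
-- of ¬ϕ called k in the text, and the outer application is l(k).
weakLEM :: (Either a (a -> Void) -> Void) -> Void
weakLEM g = g (Right (\m -> g (Left m)))

The fact that these definitions type-check is exactly the fact that the corresponding formulas have constructions.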

As you can see, using the BHK interpretation to show the intuitionistic
validity of formulas quickly becomes cumbersome and confusing. Luckily,


there are better derivation systems for intuitionistic logic, and more precise
semantic interpretations.

44.4 Natural Deduction


Natural deduction without the ⊥_C rule is a standard derivation system for
intuitionistic logic. We repeat the rules here and indicate the motivation using
the BHK interpretation. In each case, we can think of a rule which allows us
to conclude that if the premises have constructions, so does the conclusion.
Since natural deduction derivations have undischarged assumptions, we
should consider such a derivation, say, of ϕ from undischarged assumptions Γ,
as a function that turns constructions of all ψ ∈ Γ into a construction of ϕ. If
there is a derivation of ϕ from no undischarged assumptions, then there is a
construction of ϕ in the sense of the BHK interpretation. For the purpose of
the discussion, however, we’ll suppress the Γ when not needed.
An assumption ϕ by itself is a derivation of ϕ from the undischarged as-
sumption ϕ. This agrees with the BHK-interpretation: the identity function
on constructions turns any construction of ϕ into a construction of ϕ.

Conjunction

ϕ1   ϕ2
---------- ∧Intro
ϕ1 ∧ ϕ2

ϕ1 ∧ ϕ2
---------- ∧Elim_i   (i ∈ {1, 2})
   ϕi

Suppose we have constructions N1, N2 of ϕ1 and ϕ2, respectively. Then we
also have a construction of ϕ1 ∧ ϕ2, namely the pair ⟨N1, N2⟩.
A construction of ϕ1 ∧ ϕ2 on the BHK interpretation is a pair ⟨N1, N2⟩.
So assume we have such a pair. Then we also have a construction of each
conjunct: N1 is a construction of ϕ1 and N2 is a construction of ϕ2.

Conditional

[ϕ]^u
  ⋮
  ψ
------- u →Intro
ϕ → ψ

ϕ → ψ   ϕ
----------- →Elim
    ψ

If we have a derivation of ψ from undischarged assumption ϕ, then there is


a function f that turns constructions of ϕ into constructions of ψ. That same


function is a construction of ϕ → ψ. So, if the premise of →Intro has a con-


struction conditional on a construction of ϕ, the conclusion ϕ → ψ has a con-
struction.
On the other hand, suppose there are constructions N of ϕ and f of ϕ → ψ.
A construction of ϕ → ψ is a function that turns constructions of ϕ into con-
structions of ψ. So, f ( N ) is a construction of ψ, i.e., the conclusion of →Elim
has a construction.

Disjunction

   ϕi
---------- ∨Intro_i   (i ∈ {1, 2})
ϕ1 ∨ ϕ2

             [ϕ1]^u   [ϕ2]^u
               ⋮        ⋮
ϕ1 ∨ ϕ2       χ        χ
----------------------------- u ∨Elim
              χ

If we have a construction Ni of ϕi, we can turn it into a construction ⟨i, Ni⟩
of ϕ1 ∨ ϕ2. On the other hand, suppose we have a construction of ϕ1 ∨ ϕ2, i.e.,
a pair ⟨i, Ni⟩ where Ni is a construction of ϕi, and also functions f1, f2, which
turn constructions of ϕ1, ϕ2, respectively, into constructions of χ. Then fi(Ni)
is a construction of χ, the conclusion of ∨Elim.

Absurdity

  ⊥
----- ⊥I
  ϕ

If we have a derivation of ⊥ from undischarged assumptions ψ1 , . . . , ψn , then


there is a function f ( M1 , . . . , Mn ) that turns constructions of ψ1 , . . . , ψn into
a construction of ⊥. Since ⊥ has no construction, there cannot be any con-
structions of all of ψ1 , . . . , ψn either. Hence, f also has the property that if M1 ,
. . . , Mn are constructions of ψ1 , . . . , ψn , respectively, then f ( M1 , . . . , Mn ) is a
construction of ϕ.

Rules for ¬
Since ¬ ϕ is defined as ϕ → ⊥, we strictly speaking do not need rules for ¬.
But if we did, this is what they’d look like:


[ϕ]^n
  ⋮
  ⊥
----- n ¬Intro
 ¬ϕ

¬ϕ   ϕ
-------- ¬Elim
   ⊥

Examples of Derivations

1. ⊢ ϕ → (¬ϕ → ⊥), i.e., ⊢ ϕ → ((ϕ → ⊥) → ⊥):

[ϕ]²   [ϕ → ⊥]¹
----------------- →Elim
        ⊥
----------------- 1 →Intro
  (ϕ → ⊥) → ⊥
----------------- 2 →Intro
ϕ → ((ϕ → ⊥) → ⊥)

2. ⊢ ((ϕ ∧ ψ) → χ) → (ϕ → (ψ → χ)):

                     [ϕ]²   [ψ]¹
                     ----------- ∧Intro
[(ϕ ∧ ψ) → χ]³         ϕ ∧ ψ
------------------------------ →Elim
              χ
------------------------------ 1 →Intro
            ψ → χ
------------------------------ 2 →Intro
         ϕ → (ψ → χ)
------------------------------ 3 →Intro
((ϕ ∧ ψ) → χ) → (ϕ → (ψ → χ))

3. ⊢ ¬(ϕ ∧ ¬ϕ), i.e., ⊢ (ϕ ∧ (ϕ → ⊥)) → ⊥:

[ϕ ∧ (ϕ → ⊥)]¹         [ϕ ∧ (ϕ → ⊥)]¹
-------------- ∧Elim   -------------- ∧Elim
    ϕ → ⊥                    ϕ
-------------------------------------- →Elim
                  ⊥
-------------------------------------- 1 →Intro
         (ϕ ∧ (ϕ → ⊥)) → ⊥

4. ⊢ ¬¬(ϕ ∨ ¬ϕ), i.e., ⊢ ((ϕ ∨ (ϕ → ⊥)) → ⊥) → ⊥:

                               [ϕ]¹
                         ------------- ∨Intro
 [(ϕ ∨ (ϕ → ⊥)) → ⊥]²     ϕ ∨ (ϕ → ⊥)
 ------------------------------------- →Elim
                  ⊥
               ------- 1 →Intro
               ϕ → ⊥
                         ------------- ∨Intro
 [(ϕ ∨ (ϕ → ⊥)) → ⊥]²     ϕ ∨ (ϕ → ⊥)
 ------------------------------------- →Elim
                  ⊥
 ------------------------------------- 2 →Intro
        ((ϕ ∨ (ϕ → ⊥)) → ⊥) → ⊥


Proposition 44.8. If Γ ⊢ ϕ in intuitionistic logic, then Γ ⊢ ϕ in classical logic. In
particular, if ϕ is an intuitionistic theorem, it is also a classical theorem.

Proof. Every natural deduction rule is also a rule in classical natural deduc-
tion, so every derivation in intuitionistic logic is also a derivation in classical
logic.

44.5 Axiomatic Derivations


Axiomatic derivations for intuitionistic propositional logic are the conceptu-
ally simplest, and historically first, derivation systems. They work just as in
classical propositional logic.

Definition 44.9 (Derivation). If Γ is a set of formulas of L then a derivation


from Γ is a finite sequence ϕ1 , . . . , ϕn of formulas where for each i ≤ n one of
the following holds:

1. ϕi ∈ Γ; or

2. ϕi is an axiom; or

3. ϕi follows from some ϕ j and ϕk with j < i and k < i by modus ponens,
i.e., ϕk ≡ ϕ j → ϕi .

Definition 44.10 (Axioms). The set Ax₀ of axioms for intuitionistic propo-
sitional logic consists of all formulas of the following forms:

( ϕ ∧ ψ) → ϕ (44.1)
( ϕ ∧ ψ) → ψ (44.2)
ϕ → (ψ → ( ϕ ∧ ψ)) (44.3)
ϕ → ( ϕ ∨ ψ) (44.4)
ϕ → (ψ ∨ ϕ) (44.5)
( ϕ → χ) → ((ψ → χ) → (( ϕ ∨ ψ) → χ)) (44.6)
ϕ → (ψ → ϕ) (44.7)
( ϕ → (ψ → χ)) → (( ϕ → ψ) → ( ϕ → χ)) (44.8)
⊥→ϕ (44.9)

Definition 44.11 (Derivability). A formula ϕ is derivable from Γ, written Γ ⊢ ϕ,
if there is a derivation from Γ ending in ϕ.

Definition 44.12 (Theorems). A formula ϕ is a theorem if there is a derivation
of ϕ from the empty set. We write ⊢ ϕ if ϕ is a theorem and ⊬ ϕ if it is not.
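For instance, here is one standard derivation of the theorem ϕ → ϕ, using only instances of eq. (44.7) and eq. (44.8) together with modus ponens:

1. (ϕ → ((ϕ → ϕ) → ϕ)) → ((ϕ → (ϕ → ϕ)) → (ϕ → ϕ))    instance of eq. (44.8)
2. ϕ → ((ϕ → ϕ) → ϕ)                                   instance of eq. (44.7)
3. (ϕ → (ϕ → ϕ)) → (ϕ → ϕ)                             from 1 and 2 by modus ponens
4. ϕ → (ϕ → ϕ)                                         instance of eq. (44.7)
5. ϕ → ϕ                                               from 3 and 4 by modus ponens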

Proposition 44.13. If Γ ⊢ ϕ in intuitionistic logic, then Γ ⊢ ϕ in classical logic. In
particular, if ϕ is an intuitionistic theorem, it is also a classical theorem.


Proof. Every intuitionistic axiom is also a classical axiom, so every derivation


in intuitionistic logic is also a derivation in classical logic.

Problems



Chapter 45

Semantics

This chapter collects definitions for semantics for intuitionistic logic.


So far only Kripke and topological semantics are covered. There are no
examples yet, either of how models make formulas true or of proofs that
formulas are valid.

45.1 Introduction
No logic is satisfactorily described without a semantics, and intuitionistic logic
is no exception. Whereas for classical logic, the semantics based on valu-
ations is canonical, there are several competing semantics for intuitionistic
logic. None of them are completely satisfactory in the sense that they give an
intuitionistically acceptable account of the meanings of the connectives.
The semantics based on relational models, similar to the semantics for
modal logics, is perhaps the most popular one. In this semantics, proposi-
tional variables are assigned to worlds, and these worlds are related by an
accessibility relation. That relation is always a partial order, i.e., it is reflexive,
antisymmetric, and transitive.
Intuitively, you might think of these worlds as states of knowledge or "evidentiary
situations." A state w′ is accessible from w iff, for all we know, w′ is
a possible (future) state of knowledge, i.e., one that is compatible with what's
known at w. Once a proposition is known, it can't become un-known, i.e.,
whenever ϕ is known at w and Rww′, ϕ is known at w′ as well. So "knowledge"
is monotonic with respect to the accessibility relation.
If we define “ϕ is known” as in epistemic logic as “true in all epistemic
alternatives,” then ϕ ∧ ψ is known at w if in all epistemic alternatives, both ϕ
and ψ are known. But since knowledge is monotonic and R is reflexive, that
means that ϕ ∧ ψ is known at w iff ϕ and ψ are known at w. For the same


reason, ϕ ∨ ψ is known at w iff at least one of them is known. So for ∧ and ∨,


the truth conditions of the connectives coincide with those in classical logic.
The truth conditions for the conditional, however, differ from classical
logic. ϕ → ψ is known at w iff at no w′ with Rww′, ϕ is known without ψ
also being known. This is not the same as the condition that ϕ is unknown or
ψ is known at w. For if we know neither ϕ nor ψ at w, there might be a future
epistemic state w′ with Rww′ such that at w′, ϕ is known without ψ also being
known.
We know ¬ ϕ only if there is no possible future epistemic state in which
we know ϕ. Here the idea is that if ϕ were knowable, then in some possible
future epistemic state ϕ becomes known. Since we can’t know ⊥, in that future
epistemic state, we would know ϕ but not know ⊥.
On this interpretation the principle of excluded middle fails. For there are
some ϕ which we don’t yet know, but which we might come to know. For
such an ϕ, both ϕ and ¬ ϕ are unknown, so ϕ ∨ ¬ ϕ is not known. But we do
know, e.g., that ¬( ϕ ∧ ¬ ϕ). For no future state in which we know both ϕ and
¬ ϕ is possible, and we know this independently of whether or not we know ϕ
or ¬ ϕ.
Relational models are not the only available semantics for intuitionistic
logic. The topological semantics is another: here propositions are interpreted
as open sets in a topological space, and the connectives are interpreted as
operations on these sets (e.g., ∧ corresponds to intersection).

45.2 Relational Models


In order to give a precise semantics for intuitionistic propositional logic, we
have to give a definition of what counts as a model relative to which we can
evaluate formulas. On the basis of such a definition it is then also possible to
define semantic notions such as validity and entailment. One such semantics
is given by relational models.

Definition 45.1. A relational model for intuitionistic propositional logic is a
triple M = ⟨W, R, V⟩, where

1. W is a non-empty set,

2. R is a reflexive and transitive binary relation on W, and

3. V is function assigning to each propositional variable p a subset of W,


such that

4. V is monotone with respect to R, i.e., if w ∈ V(p) and Rww′, then w′ ∈
V(p).

Definition 45.2. We define the notion of ϕ being true at w in M, M, w ⊩ ϕ,
inductively as follows:


1. ϕ ≡ p: M, w ⊩ ϕ iff w ∈ V(p).

2. ϕ ≡ ⊥: not M, w ⊩ ϕ.

3. ϕ ≡ ¬ψ: M, w ⊩ ϕ iff for no w′ such that Rww′, M, w′ ⊩ ψ.

4. ϕ ≡ ψ ∧ χ: M, w ⊩ ϕ iff M, w ⊩ ψ and M, w ⊩ χ.

5. ϕ ≡ ψ ∨ χ: M, w ⊩ ϕ iff M, w ⊩ ψ or M, w ⊩ χ (or both).

6. ϕ ≡ ψ → χ: M, w ⊩ ϕ iff for every w′ such that Rww′, not M, w′ ⊩ ψ or
M, w′ ⊩ χ (or both).

We write M, w ⊮ ϕ if not M, w ⊩ ϕ. If Γ is a set of formulas, M, w ⊩ Γ means
M, w ⊩ ψ for all ψ ∈ Γ.

Proposition 45.3. Truth at worlds is monotonic with respect to R, i.e., if M, w ⊩ ϕ
and Rww′, then M, w′ ⊩ ϕ.

Proof. Exercise.
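On finite models, the clauses of Definition 45.2 can be evaluated mechanically. The following Haskell sketch (the datatypes and names are ours) implements them and checks a two-world countermodel to excluded middle, along the lines of the informal discussion in the introduction:

data Formula = Bot | Var String
             | Conj Formula Formula | Disj Formula Formula
             | Cond Formula Formula

-- A finite relational model: worlds, accessibility pairs, valuation.
data Model w = Model { worlds :: [w], acc :: [(w, w)], val :: String -> [w] }

holds :: Eq w => Model w -> w -> Formula -> Bool
holds _ _ Bot        = False
holds m w (Var p)    = w `elem` val m p
holds m w (Conj a b) = holds m w a && holds m w b
holds m w (Disj a b) = holds m w a || holds m w b
holds m w (Cond a b) = and [ not (holds m w' a) || holds m w' b
                           | (u, w') <- acc m, u == w ]

neg :: Formula -> Formula
neg a = Cond a Bot

-- Countermodel to excluded middle: W = {0, 1}, R the reflexive,
-- transitive closure of 0 R 1, and p true only at world 1.
counter :: Model Int
counter = Model [0, 1] [(0,0), (0,1), (1,1)]
                (\p -> if p == "p" then [1] else [])

-- holds counter 0 (Disj (Var "p") (neg (Var "p")))  evaluates to  False

At world 0 neither p nor ¬p is forced: p fails at 0, and ¬p fails because the accessible world 1 forces p. So p ∨ ¬p is not true at 0, and excluded middle is not intuitionistically valid.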

45.3 Semantic Notions


Definition 45.4. We say ϕ is true in the model M = ⟨W, R, V⟩, M ⊩ ϕ, iff
M, w ⊩ ϕ for all w ∈ W. ϕ is valid, ⊨ ϕ, iff it is true in all models. We say a
set of formulas Γ entails ϕ, Γ ⊨ ϕ, iff for every model M and every w such that
M, w ⊩ Γ, M, w ⊩ ϕ.

Proposition 45.5. 1. If M, w ⊩ Γ and Γ ⊨ ϕ, then M, w ⊩ ϕ.

2. If M ⊩ Γ and Γ ⊨ ϕ, then M ⊩ ϕ.

Proof. 1. Immediate from the definition of entailment: if M, w ⊩ Γ and
Γ ⊨ ϕ, then M, w ⊩ ϕ.

2. Suppose M ⊩ Γ, i.e., M, u ⊩ Γ for every u ∈ W. By (1), M, u ⊩ ϕ for
every u ∈ W, i.e., M ⊩ ϕ.

45.4 Topological Semantics


Another way to provide a semantics for intuitionistic logic is using the math-
ematical concept of a topology.

Definition 45.6. Let X be a set. A topology on X is a set O ⊆ ℘( X ) that satisfies


the properties below. The elements of O are called the open sets of the topology.
The set X together with O is called a topological space.

1. The empty set and the entire space open: ∅, X ∈ O .


2. Open sets are closed under finite intersections: if U, V ∈ O then U ∩ V ∈ O.

3. Open sets are closed under arbitrary unions: if Uᵢ ∈ O for all i ∈ I, then
⋃{Uᵢ : i ∈ I} ∈ O.

We may write X for a topological space if the collection of open sets can be inferred
from the context; note that, still, only after X is endowed with open sets can it
be called a topological space.
Definition 45.7. A topological model of intuitionistic propositional logic is a
triple X = ⟨X, O, V⟩ where O is a topology on X and V is a function assigning
an open set in O to each propositional variable.
Given a topological model X, we can define [ϕ]_X inductively as follows:

1. [⊥]_X = ∅

2. [p]_X = V(p)

3. [ϕ ∧ ψ]_X = [ϕ]_X ∩ [ψ]_X

4. [ϕ ∨ ψ]_X = [ϕ]_X ∪ [ψ]_X

5. [ϕ → ψ]_X = Int((X \ [ϕ]_X) ∪ [ψ]_X)

Here, Int(V) is the function that maps a set V ⊆ X to its interior, that is, the
union of all open sets it contains. In other words,

Int(V) = ⋃{U : U ⊆ V and U ∈ O}.

Note that the interior of any set is always open, since it is a union of open
sets. Thus, [ϕ]_X is always an open set.
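As an illustration, here is one standard example. Let X = ℝ with its usual topology, in which the open sets are the unions of open intervals, and let V(p) = ℝ \ {0}. Then [¬p]_X = Int(ℝ \ (ℝ \ {0})) = Int({0}) = ∅, and so [p ∨ ¬p]_X = (ℝ \ {0}) ∪ ∅ = ℝ \ {0}. Since this is not all of X, excluded middle is not true in this model; informally, it fails at the point 0. On the other hand, [¬¬p]_X = Int(ℝ \ ∅) = ℝ, so [¬¬p → p]_X = Int(∅ ∪ (ℝ \ {0})) = ℝ \ {0} as well, and ¬¬p → p is not true in this model either.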
Although topological semantics is highly abstract, there are ways to think
about it that might motivate it. Suppose that the elements, or "points," of X
are points at which statements can be evaluated. The set of all points where ϕ
is true is the proposition expressed by ϕ. Not every set of points is a potential
proposition; only the elements of O are. ϕ ⊨ ψ iff ψ is true at every point at
which ϕ is true, i.e., [ϕ]_X ⊆ [ψ]_X, for all X. The absurd statement ⊥ is never
true, so [⊥]_X = ∅. How must the propositions expressed by ψ ∧ χ, ψ ∨ χ, and
ψ → χ be related to those expressed by ψ and χ for the intuitionistically valid
laws to hold, i.e., so that ϕ ⊢ ψ iff [ϕ]_X ⊆ [ψ]_X? ⊥ ⊢ ϕ for any ϕ, and ∅ is the
only set that is a subset of every U. Since ψ ∧ χ ⊢ ψ, [ψ ∧ χ]_X ⊆ [ψ]_X, and
similarly [ψ ∧ χ]_X ⊆ [χ]_X. The largest set W satisfying W ⊆ U and W ⊆ V is
U ∩ V. Conversely, ψ ⊢ ψ ∨ χ and χ ⊢ ψ ∨ χ, and so [ψ]_X ⊆ [ψ ∨ χ]_X and
[χ]_X ⊆ [ψ ∨ χ]_X. The smallest set W such that U ⊆ W and V ⊆ W is U ∪ V.
The definition for → is tricky: ϕ → ψ expresses the weakest proposition that,
combined with ϕ, entails ψ. That ϕ → ψ combined with ϕ entails ψ is clear from
(ϕ → ψ) ∧ ϕ ⊢ ψ. So [ϕ → ψ]_X should be the greatest open set such that
[ϕ → ψ]_X ∩ [ϕ]_X ⊆ [ψ]_X, leading to our definition.


Problems
Problem 45.1. Show that according to Definition 45.2, M, w ⊩ ¬ϕ iff M, w ⊩ ϕ → ⊥.

Problem 45.2. Prove Proposition 45.3.



Chapter 46

Soundness and Completeness

This chapter collects soundness and completeness results for propo-


sitional intuitionistic logic. It needs an introduction. The completeness
proof makes use of facts about provability that should be stated and
proved explicitly somewhere.

46.1 Soundness of Axiomatic Derivations

The soundness proof relies on the fact that all axioms are intuitionisti-
cally valid; this still needs to be proved, e.g., in the Semantics chapter.

Theorem 46.1 (Soundness). If Γ ⊢ ϕ, then Γ ⊨ ϕ.

Proof. We prove that if Γ ⊢ ϕ, then Γ ⊨ ϕ. The proof is by induction on
the number n of formulas in the derivation of ϕ from Γ. We show that if ϕ1,
. . . , ϕn = ϕ is a derivation from Γ, then Γ ⊨ ϕn. Note that if ϕ1, . . . , ϕn is
a derivation, so is ϕ1, . . . , ϕk for any k < n.
There are no derivations of length 0, so for n = 0 the claim holds vacuously.
So suppose the claim holds for all derivations of length < n. We distinguish cases
according to the justification of ϕn.

1. ϕn is an axiom. All axioms are valid, so Γ ⊨ ϕn for any Γ.

2. ϕn ∈ Γ. Then for any M and w, if M, w ⊩ Γ, obviously M, w ⊩ ϕn, i.e.,
Γ ⊨ ϕn.

3. ϕn follows by MP from ϕi and ϕj ≡ ϕi → ϕn. ϕ1, . . . , ϕi and ϕ1, . . . , ϕj are
derivations from Γ, so by inductive hypothesis, Γ ⊨ ϕi and Γ ⊨ ϕi → ϕn.


Suppose M, w ⊩ Γ. Since M, w ⊩ Γ and Γ ⊨ ϕi → ϕn, M, w ⊩ ϕi → ϕn.
By definition, this means that for all w′ such that Rww′, if M, w′ ⊩ ϕi
then M, w′ ⊩ ϕn. Since R is reflexive, w is among the w′ such that Rww′,
i.e., we have that if M, w ⊩ ϕi then M, w ⊩ ϕn. Since Γ ⊨ ϕi, M, w ⊩ ϕi.
So, M, w ⊩ ϕn, as we wanted to show.

46.2 Soundness of Natural Deduction


Theorem 46.2 (Soundness). If Γ ⊢ ϕ, then Γ ⊨ ϕ.

Proof. We prove that if Γ ⊢ ϕ, then Γ ⊨ ϕ. The proof is by induction on the
derivation of ϕ from Γ.

1. If the derivation consists of just the assumption ϕ, we have ϕ ⊢ ϕ, and
want to show that ϕ ⊨ ϕ. Consider any model M and world w such that
M, w ⊩ ϕ. Then trivially M, w ⊩ ϕ.

2. The derivation ends in ∧Intro: Exercise.

3. The derivation ends in ∧Elim: Exercise.

4. The derivation ends in ∨Intro: Suppose the premise is ψ, and the undischarged
assumptions of the derivation ending in ψ are Γ. Then we have
Γ ⊢ ψ and by inductive hypothesis, Γ ⊨ ψ. We have to show that
Γ ⊨ ψ ∨ χ. Suppose M, w ⊩ Γ. Since Γ ⊨ ψ, M, w ⊩ ψ. But then also
M, w ⊩ ψ ∨ χ. Similarly, if the premise is χ, we have that Γ ⊨ χ, and
Γ ⊨ ψ ∨ χ follows in the same way.

5. The derivation ends in ∨Elim: The derivations ending in the premises
are of ψ ∨ χ from undischarged assumptions Γ, of θ from undischarged
assumptions ∆1 ∪ {ψ}, and of θ from undischarged assumptions ∆2 ∪
{χ}. So we have Γ ⊢ ψ ∨ χ, ∆1 ∪ {ψ} ⊢ θ, and ∆2 ∪ {χ} ⊢ θ. By
induction hypothesis, Γ ⊨ ψ ∨ χ, ∆1 ∪ {ψ} ⊨ θ, and ∆2 ∪ {χ} ⊨ θ. We
have to prove that Γ ∪ ∆1 ∪ ∆2 ⊨ θ.
Suppose M, w ⊩ Γ ∪ ∆1 ∪ ∆2. Then M, w ⊩ Γ, and since Γ ⊨ ψ ∨ χ, M, w ⊩ ψ ∨ χ.
By definition of ⊩, either M, w ⊩ ψ or M, w ⊩ χ. So we distinguish cases:
(a) M, w ⊩ ψ. Then M, w ⊩ ∆1 ∪ {ψ}. Since ∆1 ∪ {ψ} ⊨ θ, we have M, w ⊩ θ. (b)
M, w ⊩ χ. Then M, w ⊩ ∆2 ∪ {χ}. Since ∆2 ∪ {χ} ⊨ θ, we have M, w ⊩ θ. So in
either case, M, w ⊩ θ, as we wanted to show.

6. The derivation ends with →Intro concluding ψ → χ. Then the premise
is χ, and the derivation ending in the premise has undischarged assumptions
Γ ∪ {ψ}. So we have that Γ ∪ {ψ} ⊢ χ, and by induction hypothesis
that Γ ∪ {ψ} ⊨ χ. We have to show that Γ ⊨ ψ → χ.


Suppose M, w ⊩ Γ. We want to show that for all w′ such that Rww′,
if M, w′ ⊩ ψ, then M, w′ ⊩ χ. So assume that Rww′ and M, w′ ⊩ ψ. By
Proposition 45.3, M, w′ ⊩ Γ. Since Γ ∪ {ψ} ⊨ χ, M, w′ ⊩ χ, which is what we wanted
to show.

7. The derivation ends in →Elim with conclusion χ. The premises are ψ → χ
and ψ, with derivations from undischarged assumptions Γ and ∆, respectively. So we
have Γ ⊢ ψ → χ and ∆ ⊢ ψ. By inductive hypothesis, Γ ⊨ ψ → χ and
∆ ⊨ ψ. We have to show that Γ ∪ ∆ ⊨ χ.
Suppose M, w ⊩ Γ ∪ ∆. Since M, w ⊩ Γ and Γ ⊨ ψ → χ, M, w ⊩ ψ → χ.
By definition, this means that for all w′ such that Rww′, if M, w′ ⊩ ψ
then M, w′ ⊩ χ. Since R is reflexive, w is among the w′ such that Rww′,
i.e., we have that if M, w ⊩ ψ then M, w ⊩ χ. Since M, w ⊩ ∆ and ∆ ⊨ ψ,
M, w ⊩ ψ. So, M, w ⊩ χ, as we wanted to show.

8. The derivation ends in ⊥I, concluding ϕ. The premise is ⊥ and the
undischarged assumptions of the derivation of the premise are Γ. Then
Γ ⊢ ⊥. By inductive hypothesis, Γ ⊨ ⊥. We have to show Γ ⊨ ϕ.
We proceed indirectly. If Γ ⊭ ϕ, there is a model M and world w such that
M, w ⊩ Γ and M, w ⊮ ϕ. Since Γ ⊨ ⊥, M, w ⊩ ⊥. But that's impossible,
since by definition, M, w ⊮ ⊥. So Γ ⊨ ϕ.

9. The derivation ends in ¬Intro: Exercise.

10. The derivation ends in ¬Elim: Exercise.

46.3 Lindenbaum’s Lemma


Definition 46.3. A set of formulas Γ is prime iff

1. Γ is consistent;

2. if Γ ⊢ ϕ then ϕ ∈ Γ; and

3. if ϕ ∨ ψ ∈ Γ then ϕ ∈ Γ or ψ ∈ Γ.

Lemma 46.4 (Lindenbaum's Lemma). If Γ ⊬ ϕ, there is a Γ* ⊇ Γ such that Γ* is
prime and Γ* ⊬ ϕ.

Proof. Let ψ1 ∨ χ1, ψ2 ∨ χ2, . . . , be an enumeration of all formulas of the form ψ ∨ χ.
We'll define an increasing sequence of sets of formulas Γn, where each Γn+1
is defined as Γn together with one new formula. Γ* will be the union of all Γn.
The new formulas are selected so as to ensure that Γ* is prime and still Γ* ⊬ ϕ.
This means that at each step we should find the first disjunction ψi ∨ χi such
that:


1. Γn ⊢ ψi ∨ χi, and

2. ψi ∉ Γn and χi ∉ Γn.

We add to Γn either ψi, if Γn ∪ {ψi} ⊬ ϕ, or χi otherwise. We'll have to show
that this works. For now, let's define i(n) as the least i such that (1) and (2)
hold.
Define Γ0 = Γ and

Γn+1 = Γn ∪ {ψi(n)} if Γn ∪ {ψi(n)} ⊬ ϕ, and Γn+1 = Γn ∪ {χi(n)} otherwise.

If i(n) is undefined, i.e., whenever Γn ⊢ ψ ∨ χ, either ψ ∈ Γn or χ ∈ Γn, we let
Γn+1 = Γn. Now let Γ* = ⋃_{n=0}^∞ Γn.
First we show that for all n, Γn ⊬ ϕ. We proceed by induction on n. For
n = 0 the claim holds by the hypothesis of the theorem, i.e., Γ ⊬ ϕ. If n > 0,
we have to show that if Γn ⊬ ϕ then Γn+1 ⊬ ϕ. If i(n) is undefined, Γn+1 = Γn
and there is nothing to prove. So suppose i(n) is defined. For simplicity, let
i = i(n).
We'll prove the contrapositive of the claim. Suppose Γn+1 ⊢ ϕ. By construction,
Γn+1 = Γn ∪ {ψi} if Γn ∪ {ψi} ⊬ ϕ, or else Γn+1 = Γn ∪ {χi}. It
clearly can't be the first, since then Γn+1 ⊬ ϕ, contrary to our assumption. Hence,
Γn ∪ {ψi} ⊢ ϕ and Γn+1 = Γn ∪ {χi}, so also Γn ∪ {χi} ⊢ ϕ. By definition of i(n),
we have that Γn ⊢ ψi ∨ χi. Since ϕ is derivable both from Γn ∪ {ψi} and from
Γn ∪ {χi}, it follows by proof by cases that Γn ⊢ ϕ, which is what we wanted to show.
If Γ* ⊢ ϕ, there would be some finite subset Γ′ ⊆ Γ* such that Γ′ ⊢ ϕ. Each
θ ∈ Γ′ must be in Γi for some i. Let n be the largest of these. Since Γi ⊆ Γn if
i ≤ n, Γ′ ⊆ Γn. But then Γn ⊢ ϕ, contrary to our proof above that Γn ⊬ ϕ.
Lastly, we show that Γ* is prime, i.e., satisfies conditions (1), (2), and (3) of
Definition 46.3.
First, Γ* ⊬ ϕ, so Γ* is consistent, and (1) holds.
We now show that if Γ* ⊢ ψ ∨ χ, then either ψ ∈ Γ* or χ ∈ Γ*. This
proves (3), since if ψ ∨ χ ∈ Γ* then also Γ* ⊢ ψ ∨ χ. So assume
Γ* ⊢ ψ ∨ χ but ψ ∉ Γ* and χ ∉ Γ*. Since Γ* ⊢ ψ ∨ χ, Γn ⊢ ψ ∨ χ for some n.
ψ ∨ χ appears in the enumeration of all disjunctions, say as ψj ∨ χj. ψj ∨ χj
satisfies the properties in the definition of i(n), namely we have Γn ⊢ ψj ∨ χj,
while ψj ∉ Γn and χj ∉ Γn. At each stage, at least one fewer disjunction ψi ∨ χi
satisfies the conditions (since at each stage we add either ψi or χi), so at some
stage m we will have j = i(m). But then either ψ ∈ Γm+1 or χ ∈ Γm+1,
contrary to the assumption that ψ ∉ Γ* and χ ∉ Γ*.
Now suppose Γ* ⊢ ψ. Then Γ* ⊢ ψ ∨ ψ. But we've just proved that if
Γ* ⊢ ψ ∨ ψ then ψ ∈ Γ*. Hence, Γ* satisfies condition (2) of Definition 46.3.


46.4 The Canonical Model


The worlds in our model will be finite sequences σ of natural numbers, i.e.,
σ ∈ ℕ*. Note that ℕ* is inductively defined by:

1. Λ ∈ ℕ*.

2. If σ ∈ ℕ* and n ∈ ℕ, then σ.n ∈ ℕ* (where σ.n is σ ⌢ ⟨n⟩).

3. Nothing else is in ℕ*.

So we can use ℕ* to give inductive definitions.


Let ⟨ψ1, χ1⟩, ⟨ψ2, χ2⟩, . . . , be an enumeration of all pairs of formulas. Given
a set of formulas ∆, define ∆(σ) by induction as follows:

1. ∆(Λ) = ∆

2. ∆(σ.n) = (∆(σ) ∪ {ψn})* if ∆(σ) ∪ {ψn} ⊬ χn, and ∆(σ.n) = ∆(σ) otherwise.

Here by (∆(σ) ∪ {ψn})* we mean the prime set of formulas which exists by Lemma 46.4
applied to the set ∆(σ) ∪ {ψn}. Note that by this definition, if ∆(σ) ∪ {ψn} ⊬
χn, then ∆(σ.n) ⊢ ψn and ∆(σ.n) ⊬ χn. Note also that ∆(σ) ⊆ ∆(σ.n) for any n.
If ∆ is prime, then ∆(σ) is prime for all σ.

Definition 46.5. Suppose ∆ is prime. Then the canonical model M(∆) for ∆ is defined
by:

1. W = ℕ*, the set of finite sequences of natural numbers.

2. R is the partial order according to which Rσσ′ iff σ is an initial segment
of σ′ (i.e., σ′ = σ ⌢ σ″ for some sequence σ″).

3. V(p) = {σ : p ∈ ∆(σ)}.

It is easy to verify that R is indeed a partial order. Also, the monotonicity
condition on V is satisfied: since ∆(σ) ⊆ ∆(σ.n), we get ∆(σ) ⊆ ∆(σ′)
whenever Rσσ′, by induction on the length of σ″.

46.5 The Truth Lemma


Lemma 46.6. If ∆ is prime, then M(∆), σ ⊩ ϕ iff ∆(σ) ⊢ ϕ.

Proof. By induction on ϕ.

1. ϕ ≡ ⊥: Since ∆(σ) is prime, it is consistent, so ∆(σ) ⊬ ϕ. By definition,
M(∆), σ ⊮ ϕ.


2. ϕ ≡ p: By definition of ⊩, M(∆), σ ⊩ ϕ iff σ ∈ V(p), i.e., p ∈ ∆(σ). Since
∆(σ) is deductively closed, this holds iff ∆(σ) ⊢ ϕ.

3. ϕ ≡ ¬ψ: exercise.

4. ϕ ≡ ψ ∧ χ: M(∆), σ ⊩ ϕ iff M(∆), σ ⊩ ψ and M(∆), σ ⊩ χ. By induction
hypothesis, M(∆), σ ⊩ ψ iff ∆(σ) ⊢ ψ, and similarly for χ. But ∆(σ) ⊢ ψ
and ∆(σ) ⊢ χ iff ∆(σ) ⊢ ϕ.

5. ϕ ≡ ψ ∨ χ: M(∆), σ ⊩ ϕ iff M(∆), σ ⊩ ψ or M(∆), σ ⊩ χ. By induction
hypothesis, this holds iff ∆(σ) ⊢ ψ or ∆(σ) ⊢ χ. We have to show that
this in turn holds iff ∆(σ) ⊢ ϕ. The left-to-right direction is clear. The
right-to-left direction follows since ∆(σ) is prime.

6. ϕ ≡ ψ → χ: First the contrapositive of the left-to-right direction: Assume
∆(σ) ⊬ ψ → χ. Then also ∆(σ) ∪ {ψ} ⊬ χ. Since ⟨ψ, χ⟩ is ⟨ψn, χn⟩
for some n, we have ∆(σ.n) = (∆(σ) ∪ {ψ})*, and ∆(σ.n) ⊢ ψ but ∆(σ.n) ⊬ χ. By
inductive hypothesis, M(∆), σ.n ⊩ ψ and M(∆), σ.n ⊮ χ. Since Rσ(σ.n),
this means that M(∆), σ ⊮ ϕ.
Now assume ∆(σ) ⊢ ψ → χ, and let Rσσ′. Since ∆(σ) ⊆ ∆(σ′), we have
∆(σ′) ⊢ ψ → χ, and so: if ∆(σ′) ⊢ ψ, then ∆(σ′) ⊢ χ. In other words, for every σ′
such that Rσσ′, either ∆(σ′) ⊬ ψ or ∆(σ′) ⊢ χ. By induction hypothesis, this means that
whenever Rσσ′, either M(∆), σ′ ⊮ ψ or M(∆), σ′ ⊩ χ, i.e., M(∆), σ ⊩ ϕ.

46.6 The Completeness Theorem


Theorem 46.7. If Γ ⊨ ϕ then Γ ⊢ ϕ.

Proof. We prove the contrapositive: Suppose Γ ⊬ ϕ. Then by Lindenbaum's Lemma
(Lemma 46.4), there is a prime set Γ* ⊇ Γ such that Γ* ⊬ ϕ. Consider the canonical
model M(Γ*) for Γ* as defined in Definition 46.5. Note that Γ*(Λ) = Γ*, so
for any ψ ∈ Γ, Γ*(Λ) ⊢ ψ. By the Truth Lemma (Lemma 46.6), we have
M(Γ*), Λ ⊩ ψ for all ψ ∈ Γ and M(Γ*), Λ ⊮ ϕ. This shows that Γ ⊭ ϕ.

Problems
Problem 46.1. Complete the proof of Theorem 46.2. For the cases for ¬Intro and ¬Elim,
use the definition of M, w ⊩ ¬ϕ in Definition 45.2, i.e., don't treat ¬ϕ as defined by ϕ → ⊥.



Chapter 47

Propositions as Types

This is a very experimental draft of a chapter on the Curry-Howard cor-


respondence. It needs more explanation and motivation, and there are
probably errors and omissions. The proof of normalization should be re-
viewed and expanded. There are no examples for the product type. Permutation
and simplification conversions are not covered. It will make a
lot more sense once there is also material on the (typed) lambda calculus
which is basically presupposed here. Use with extreme caution.

47.1 Introduction
Historically the lambda calculus and intuitionistic logic were developed sepa-
rately. Haskell Curry and William Howard independently discovered a close
similarity: types in a typed lambda calculus correspond to formulas in intu-
itionistic logic in such a way that a derivation of a formula corresponds di-
rectly to a typed lambda term with that formula as its type. Moreover, beta re-
duction in the typed lambda calculus corresponds to certain transformations
of derivations.
For instance, a derivation of ϕ → ψ corresponds to a term λx^ϕ. N^ψ, which
has the function type ϕ → ψ. The inference rules of natural deduction correspond
to typing rules in the typed lambda calculus; e.g., the derivation

[ϕ]^x
  ⋮
  ψ
------- x →Intro
ϕ → ψ

corresponds to the typing rule

x : ϕ ⇒ N : ψ
---------------------- λ
⇒ λx^ϕ. N^ψ : ϕ → ψ

where the rule on the right means that if x is of type ϕ and N is of type ψ, then
λx^ϕ. N is of type ϕ → ψ.


The →Elim rule corresponds to the typing rule for application terms, i.e.,

ϕ → ψ   ϕ
----------- →Elim
    ψ

corresponds to

⇒ P : ϕ → ψ   ⇒ Q : ϕ
----------------------- app
⇒ P^{ϕ→ψ} Q^ϕ : ψ

If a →Intro rule is followed immediately by a →Elim rule, the derivation
can be simplified. The derivation

[ϕ]^x
  ⋮
  ψ
------- x →Intro
ϕ → ψ           ϕ
----------------- →Elim
        ψ

reduces in one step (▹1) to the derivation of ψ in which the discharged assumptions
[ϕ]^x are replaced by the given derivation of the minor premise ϕ. This corresponds
to the beta reduction of lambda terms:

(λx^ϕ. P^ψ) Q ▹1 P[Q/x].

Similar correspondences hold between the rules for ∧ and “product” types,
and between the rules for ∨ and “sum” types.
This correspondence between terms in the simply typed lambda calculus
and natural deduction derivations is called the “Curry-Howard”, or “propo-
sitions as types” correspondence. In addition to formulas (propositions) cor-
responding to types, and proofs to terms, we can summarize the correspon-
dences as follows:

logic                      program
proposition                type
proof                      term
assumption                 variable
discharged assumption      bound variable
undischarged assumption    free variable
implication                function type
conjunction                product type
disjunction                sum type
absurdity                  bottom type

The Curry-Howard correspondence is one of the cornerstones of auto-


mated proof assistants and type checkers for programs, since checking a proof
witnessing a proposition (as we did above) amounts to checking if a program
(term) has the declared type.


47.2 Sequent Natural Deduction


Let us write Γ ⇒ ϕ if there is a natural deduction derivation with Γ as undis-
charged assumptions and ϕ as conclusion; or ⇒ ϕ if Γ is empty.
We write Γ, ϕ1 , . . . , ϕn for Γ ∪ { ϕ1 , . . . , ϕn }, and Γ, ∆ for Γ ∪ ∆.
Observe that when we have Γ ⇒ ϕ ∧ ψ, meaning we have a derivation
with Γ as undischarged assumptions and ϕ ∧ ψ as end-formula, then by applying
∧Elim at the bottom, we can get a derivation with the same undischarged
assumptions and ϕ as conclusion. In other words, if Γ ⇒ ϕ ∧ ψ, then
Γ ⇒ ϕ.

Γ ⇒ ϕ ∧ ψ             Γ ⇒ ϕ ∧ ψ
----------- ∧Elim     ----------- ∧Elim
Γ ⇒ ϕ                 Γ ⇒ ψ
The label ∧Elim hints at the relation with the rule of the same name in natural
deduction.
Likewise, suppose we have Γ, ϕ ⇒ ψ, meaning we have a derivation with
undischarged assumptions Γ, ϕ and end-formula ψ. If we apply the →Intro
rule, we have a derivation with Γ as undischarged assumptions and ϕ → ψ as
the end-formula, i.e., Γ ⇒ ϕ → ψ. Note how this has made the discharge of
assumptions more explicit.

Γ, ϕ ⇒ ψ
----------- →Intro
Γ ⇒ ϕ → ψ
We can draw conclusions from other rules in the same fashion, which is
spelled out as follows:

Γ ⇒ ϕ   ∆ ⇒ ψ
---------------- ∧Intro
Γ, ∆ ⇒ ϕ ∧ ψ

Γ ⇒ ϕ ∧ ψ             Γ ⇒ ϕ ∧ ψ
----------- ∧Elim1    ----------- ∧Elim2
Γ ⇒ ϕ                 Γ ⇒ ψ

Γ ⇒ ϕ                 Γ ⇒ ψ
----------- ∨Intro1   ----------- ∨Intro2
Γ ⇒ ϕ ∨ ψ             Γ ⇒ ϕ ∨ ψ

Γ ⇒ ϕ ∨ ψ   ∆, ϕ ⇒ χ   ∆′, ψ ⇒ χ
---------------------------------- ∨Elim
Γ, ∆, ∆′ ⇒ χ

Γ, ϕ ⇒ ψ              ∆ ⇒ ϕ → ψ   Γ ⇒ ϕ
----------- →Intro    ------------------ →Elim
Γ ⇒ ϕ → ψ             Γ, ∆ ⇒ ψ

Γ ⇒ ⊥
------- ⊥I
Γ ⇒ ϕ

Any assumption by itself is a derivation of ϕ from ϕ, i.e., we always have


ϕ ⇒ ϕ.

ϕ ⇒ ϕ


Together, these rules can be taken as a calculus about what natural deduc-
tion derivations exist. They can also be taken as a notational variant of natural
deduction, in which each step records not only the formula derived but also
the undischarged assumptions from which it was derived.

1. ϕ ⇒ ϕ
2. ϕ ⇒ ϕ ∨ (ϕ → ⊥)        ∨Intro, 1
3. ψ ⇒ ψ
4. ϕ, ψ ⇒ ⊥               →Elim, 3, 2
5. ψ ⇒ ϕ → ⊥              →Intro, 4
6. ψ ⇒ ϕ ∨ (ϕ → ⊥)        ∨Intro, 5
7. ψ ⇒ ⊥                  →Elim, 3, 6
8. ⇒ ψ → ⊥                →Intro, 7

where ψ is short for (ϕ ∨ (ϕ → ⊥)) → ⊥.

47.3 Proof Terms


We give the definition of proof terms, and then establish its relation with nat-
ural deduction derivations.

Definition 47.1 (Proof terms). Proof terms are inductively generated by the
following rules:

1. A single variable x is a proof term.

2. If P and Q are proof terms, then PQ is also a proof term.

3. If x is a variable, ϕ is a formula, and N is a proof term, then λx^ϕ. N is
also a proof term.

4. If P and Q are proof terms, then ⟨P, Q⟩ is a proof term.

5. If M is a proof term, then pi(M) is also a proof term, where i is 1 or 2.

6. If M is a proof term, and ϕ is a formula, then ini^ϕ(M) is a proof term,
where i is 1 or 2.

7. If M, N1, N2 are proof terms, and x1, x2 are variables, then case(M, x1.N1, x2.N2)
is a proof term.

8. If M is a proof term and ϕ is a formula, then contr_ϕ(M) is a proof term.

Each of the above rules corresponds to an inference rule in natural deduc-


tion. Thus we can inductively assign proof terms to the formulas in a deriva-
tion. To make this assignment unique, we must distinguish between the two
versions of ∧Elim and of ∨Intro. For instance, the proof terms assigned to
the conclusion of ∨Intro must carry the information whether ϕ ∨ ψ is inferred


from ϕ or from ψ. Suppose M is the term assigned to ϕ, from which ϕ ∨ ψ is
inferred. Then the proof term assigned to ϕ ∨ ψ is in1^ψ(M). If we instead infer
ψ ∨ ϕ, then the proof term assigned is in2^ψ(M).
The term λx^ϕ. N is assigned to the conclusion of →Intro. The ϕ records the
assumption being discharged; only by including it can we determine the
formula of λx^ϕ. N from the formula of N.

Definition 47.2 (Typing context). A typing context is a mapping from variables
to formulas. We will call it simply the "context" if there is no confusion. We
write a context Γ as a set of pairs ⟨x, ϕ⟩.

A pair ⟨Γ, M⟩ where M is a proof term represents a derivation of a formula
with context Γ.

Definition 47.3 (Typing pair). A typing pair is a pair ⟨Γ, M⟩, where Γ is a typing
context and M is a proof term.

Since in general terms only make sense with specific contexts, we will
speak simply of "terms" from now on instead of "typing pairs"; and it will
be apparent when we are talking about the literal term M.

47.4 Converting Derivations to Proof Terms


We will describe the process of converting natural deduction derivations to
pairs. We will write a proof term to the left of each formula in the derivation,
resulting in expressions of the form M : ϕ. We'll then say that M witnesses ϕ.
Let’s call such an expression a judgment.
First let us assign to each assumption a variable, with the following con-
straints:

1. Assumptions discharged in the same step (that is, with the same number
on the square bracket) must be assigned the same variable.

2. For assumptions not discharged, assumptions of different formulas should


be assigned different variables.

Such an assignment translates all assumptions of the form

ϕ into x : ϕ.

With assumptions all associated with variables (which are terms), we can now
inductively translate the rest of the deduction tree. The modified natural de-
duction rules taking into account context and proof terms are given below.
Given the proof terms for the premise(s), we obtain the corresponding proof
term for conclusion.


M1 : ϕ1   M2 : ϕ2
-------------------- ∧Intro
⟨M1, M2⟩ : ϕ1 ∧ ϕ2

M : ϕ1 ∧ ϕ2            M : ϕ1 ∧ ϕ2
------------ ∧Elim1    ------------ ∧Elim2
p1(M) : ϕ1             p2(M) : ϕ2

In ∧Intro we assume we have ϕ1 witnessed by term M1 and ϕ2 witnessed
by term M2. We pack up the two terms into a pair ⟨M1, M2⟩ which witnesses
ϕ1 ∧ ϕ2.
In ∧Elim_i we assume that M witnesses ϕ1 ∧ ϕ2. The term witnessing ϕi
is pi(M). Note that M is not necessarily of the form ⟨M1, M2⟩, so we cannot
simply assign Mi to the conclusion ϕi.
Note how this coincides with the BHK interpretation. What the BHK in-
terpretation does not specify is how the function used as proof for ϕ → ψ is
supposed to be obtained. If we think of proof terms as proofs or functions of
proofs, we can be more explicit.
[ x : ϕ]

P : ϕ→ψ Q:ϕ
→Elim
PQ : ψ
N:ψ
→Intro
λx ϕ . N : ϕ → ψ
The λ notation should be understood as the same as in the lambda calculus,
and PQ means applying P to Q.

M1 : ϕ1                          M2 : ϕ2
------------------------ ∨Intro1 ------------------------ ∨Intro2
in1^{ϕ2}(M1) : ϕ1 ∨ ϕ2           in2^{ϕ1}(M2) : ϕ1 ∨ ϕ2

                [x1 : ϕ1]^u   [x2 : ϕ2]^u
                     ⋮             ⋮
M : ϕ1 ∨ ϕ2       N1 : χ        N2 : χ
----------------------------------------- u ∨Elim
case(M, x1.N1, x2.N2) : χ

The proof term in1^{ϕ2}(M1) is a term witnessing ϕ1 ∨ ϕ2, where M1 witnesses
ϕ1.
The term case(M, x1.N1, x2.N2) mimics the case clause in programming
languages: we already have a derivation of ϕ1 ∨ ϕ2, a derivation of χ assuming
ϕ1, and a derivation of χ assuming ϕ2. The case operator thus selects the
appropriate proof depending on M; either way it's a proof of χ.

N : ⊥
---------------- ⊥I
contr_ϕ(N) : ϕ


contr_ϕ(N) is a term witnessing ϕ, whenever N is a term witnessing ⊥.


Now we have a natural deduction derivation with all formulas associated
with a term. At each step, the relevant typing context Γ is given by the list of
assumptions remaining undischarged at that step. Note that Γ is well defined:
since we have forbidden undischarged assumptions of different formulas
to be assigned the same variable, there won't be any disagreement about
which formula a variable is mapped to.
We now give some examples of such translations:
Consider the derivation of ¬¬(ϕ ∨ ¬ϕ), i.e., of ((ϕ ∨ (ϕ → ⊥)) → ⊥) → ⊥.
Writing each judgment on its own line, its translation is:

1. [x : ϕ]¹                                               assumption
2. in1^{ϕ→⊥}(x) : ϕ ∨ (ϕ → ⊥)                             ∨Intro1, 1
3. [y : (ϕ ∨ (ϕ → ⊥)) → ⊥]²                               assumption
4. y(in1^{ϕ→⊥}(x)) : ⊥                                    →Elim, 3, 2
5. λx^ϕ. y(in1^{ϕ→⊥}(x)) : ϕ → ⊥                          →Intro, 4, discharging 1
6. in2^ϕ(λx^ϕ. y(in1^{ϕ→⊥}(x))) : ϕ ∨ (ϕ → ⊥)             ∨Intro2, 5
7. y(in2^ϕ(λx^ϕ. y(in1^{ϕ→⊥}(x)))) : ⊥                    →Elim, 3, 6
8. λy^{(ϕ∨(ϕ→⊥))→⊥}. y(in2^ϕ(λx^ϕ. y(in1^{ϕ→⊥}(x)))) : ((ϕ ∨ (ϕ → ⊥)) → ⊥) → ⊥
                                                          →Intro, 7, discharging 2

The derivation has no undischarged assumptions, so the context is empty; we get:

⊢ λy^{(ϕ∨(ϕ→⊥))→⊥}. y(in2^ϕ(λx^ϕ. y(in1^{ϕ→⊥}(x)))) : ((ϕ ∨ (ϕ → ⊥)) → ⊥) → ⊥

If we leave out the last →Intro, the assumption denoted by y would be in the
context and we would get:

y : (ϕ ∨ (ϕ → ⊥)) → ⊥ ⊢ y(in2^ϕ(λx^ϕ. y(in1^{ϕ→⊥}(x)))) : ⊥
Another example: ⊢ ϕ → ((ϕ → ⊥) → ⊥):

[x : ϕ]²   [y : ϕ → ⊥]¹
------------------------ →Elim
        yx : ⊥
------------------------ 1 →Intro
λy^{ϕ→⊥}. yx : (ϕ → ⊥) → ⊥
------------------------ 2 →Intro
λx^ϕ. λy^{ϕ→⊥}. yx : ϕ → ((ϕ → ⊥) → ⊥)

Again all assumptions are discharged and thus the context is empty; the resulting
judgment is

⊢ λx^ϕ. λy^{ϕ→⊥}. yx : ϕ → ((ϕ → ⊥) → ⊥)

If we leave out the last two →Intro inferences, the assumptions denoted by
both x and y would be in the context and we would get

x : ϕ, y : ϕ → ⊥ ⊢ yx : ⊥


47.5 Recovering Derivations from Proof Terms


Now let us consider the other direction: translating terms back to natural deduction
trees. We will still use the double negation of the excluded middle
as our example, and let S denote its proof term, i.e.,

λy^{(ϕ∨(ϕ→⊥))→⊥}. y(in2^ϕ(λx^ϕ. y(in1^{ϕ→⊥}(x)))) : ((ϕ ∨ (ϕ → ⊥)) → ⊥) → ⊥

For each natural deduction rule, the term in the conclusion is always formed
by wrapping some operator around the terms assigned to the premise(s). Rules
correspond uniquely to such operators. For example, from the structure of
S we infer that the last rule applied must be →Intro, since S is of the form
λy^{...}. . . . , and the λ operator corresponds to →Intro. In general we can recover
the skeleton of the derivation solely from the structure of the term, with the
formulas still to be filled in:

1. [x : ]¹
2. in1^{ϕ→⊥}(x) :                                        ∨Intro1, 1
3. [y : ]²
4. y(in1^{ϕ→⊥}(x)) :                                     →Elim, 3, 2
5. λx^ϕ. y(in1^{ϕ→⊥}(x)) :                               →Intro, 4, discharging 1
6. in2^ϕ(λx^ϕ. y(in1^{ϕ→⊥}(x))) :                        ∨Intro2, 5
7. y(in2^ϕ(λx^ϕ. y(in1^{ϕ→⊥}(x)))) :                     →Elim, 3, 6
8. λy^{(ϕ∨(ϕ→⊥))→⊥}. y(in2^ϕ(λx^ϕ. y(in1^{ϕ→⊥}(x)))) :  →Intro, 7, discharging 2

Our next step is to recover the formulas these terms witness. We define a
function F(Γ, M), which denotes the formula witnessed by M in context Γ, by
induction on M as follows:

F(Γ, x) = Γ(x)
F(Γ, ⟨N1, N2⟩) = F(Γ, N1) ∧ F(Γ, N2)
F(Γ, pi(N)) = ϕi if F(Γ, N) = ϕ1 ∧ ϕ2
F(Γ, in1^ϕ(N)) = F(Γ, N) ∨ ϕ
F(Γ, in2^ϕ(N)) = ϕ ∨ F(Γ, N)
F(Γ, case(M, x1.N1, x2.N2)) = F(Γ ∪ {xi : ϕi}, Ni) if F(Γ, M) = ϕ1 ∨ ϕ2
F(Γ, λx^ϕ. N) = ϕ → F(Γ ∪ {x : ϕ}, N)
F(Γ, NM) = ψ if F(Γ, N) = ϕ → ψ

where Γ(x) means the formula mapped to by x in Γ, and Γ ∪ {x : ϕ} is a
context exactly like Γ except that it maps x to ϕ, whether or not x is already in Γ.
Note that there are cases where F(Γ, M) is not defined, for example:

1. In the clause for variables, it is possible that x is not in Γ.

2. In the recursive cases, the inner invocation may be undefined, making the
outer one undefined too.

3. The clause for case(M, x1.N1, x2.N2) is only defined when F(Γ, M) is of
the form ϕ1 ∨ ϕ2 and the right-hand side is independent of i, i.e.,
F(Γ ∪ {x1 : ϕ1}, N1) = F(Γ ∪ {x2 : ϕ2}, N2).
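Read computationally, F is a type checker for proof terms. The following Haskell sketch (all datatypes and names are ours) implements the clauses above for this fragment, with Maybe standing in for "undefined"; unlike the clause for F(Γ, NM), it also checks that the argument actually witnesses ϕ, a condition the definition leaves implicit:

import qualified Data.Map as Map

data Form = Bot | Atom String
          | Conj Form Form | Disj Form Form | Cond Form Form
  deriving (Eq, Show)

data Term = Var String                         -- x
          | Pair Term Term                     -- ⟨M, N⟩
          | Proj Int Term                      -- p_i(M)
          | Inj Int Form Term                  -- in_i^ϕ(M), annotated with the other disjunct
          | Case Term String Term String Term  -- case(M, x1.N1, x2.N2)
          | Lam String Form Term               -- λx^ϕ. N
          | App Term Term                      -- P Q
          | Contr Form Term                    -- contr_ϕ(M)
  deriving (Eq, Show)

type Ctx = Map.Map String Form

-- F(Γ, M): the formula witnessed by M in context Γ, if any.
typeOf :: Ctx -> Term -> Maybe Form
typeOf g (Var x)       = Map.lookup x g
typeOf g (Pair m n)    = Conj <$> typeOf g m <*> typeOf g n
typeOf g (Proj i m)    = do Conj a b <- typeOf g m
                            return (if i == 1 then a else b)
typeOf g (Inj i phi m) = do a <- typeOf g m
                            return (if i == 1 then Disj a phi else Disj phi a)
typeOf g (Case m x1 n1 x2 n2) = do
  Disj a b <- typeOf g m
  c1 <- typeOf (Map.insert x1 a g) n1
  c2 <- typeOf (Map.insert x2 b g) n2
  if c1 == c2 then Just c1 else Nothing      -- result must not depend on the branch
typeOf g (Lam x phi n) = Cond phi <$> typeOf (Map.insert x phi g) n
typeOf g (App p q)     = do Cond a b <- typeOf g p
                            a' <- typeOf g q
                            if a == a' then Just b else Nothing
typeOf g (Contr phi m) = do Bot <- typeOf g m
                            return phi

For instance, typeOf Map.empty (Lam "x" (Atom "p") (Var "x")) returns Just (Cond (Atom "p") (Atom "p")), i.e., the identity term witnesses p → p.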

As we recursively compute F(Γ, M), we work our way up the natural
deduction derivation. Every step in the computation of F(Γ, M) corresponds
to a judgment in the derivation from which M was obtained, and the formula
computed is the end-formula of that derivation.
However, the result may not be defined for some choices of Γ. We say that
such pairs ⟨Γ, M⟩ are ill-typed, and otherwise well-typed. If the term
M results from translating a derivation, and the formulas in Γ correspond to
the undischarged assumptions of the derivation, the pair ⟨Γ, M⟩ will be well-typed.

Proposition 47.4. If D is a derivation with undischarged assumptions ϕ1 , . . . , ϕn ,


M is the proof term associated with D and Γ = { x1 : ϕ1 , . . . , xn : ϕn }, then the
result of recovering a derivation from M in context Γ is D.

In the other direction, if we first translate a typing pair to natural deduction
and then translate it back, we won't get the same pair back, since the choice of
variables for the undischarged assumptions is underdetermined. For example,
consider the pair ⟨{x : ϕ, y : ϕ → ψ}, yx⟩. The corresponding derivation
is

ϕ → ψ   ϕ
----------- →Elim
    ψ

By assigning different variables to the undischarged assumptions, say, u to


ϕ → ψ and v to ϕ, we would get the term uv rather than yx. There is a connec-
tion, though: the terms will be the same up to renaming of variables.
Now that we have established the correspondence between typing pairs and
natural deduction, we can prove theorems for typing pairs and transfer the
results to natural deduction derivations.
Similar to what we did in the natural deduction section, we can make some
observations here too. Let Γ ⊢ M : ϕ denote that there is a pair ⟨Γ, M⟩ witnessing
the formula ϕ. Then we always have Γ ⊢ x : ϕ if x : ϕ ∈ Γ, and the following
rules are valid:


Γ ⊢ M1 : ϕ1   ∆ ⊢ M2 : ϕ2                Γ ⊢ M : ϕ1 ∧ ϕ2
--------------------------- ∧Intro       ---------------- ∧Elim_i
Γ, ∆ ⊢ ⟨M1, M2⟩ : ϕ1 ∧ ϕ2                Γ ⊢ pi(M) : ϕi

Γ ⊢ M1 : ϕ1                              Γ ⊢ M2 : ϕ2
---------------------------- ∨Intro1     ---------------------------- ∨Intro2
Γ ⊢ in1^{ϕ2}(M1) : ϕ1 ∨ ϕ2               Γ ⊢ in2^{ϕ1}(M2) : ϕ1 ∨ ϕ2

Γ ⊢ M : ϕ1 ∨ ϕ2   ∆1, x1 : ϕ1 ⊢ N1 : χ   ∆2, x2 : ϕ2 ⊢ N2 : χ
--------------------------------------------------------------- ∨Elim
Γ, ∆1, ∆2 ⊢ case(M, x1.N1, x2.N2) : χ

Γ, x : ϕ ⊢ N : ψ                         ∆ ⊢ P : ϕ → ψ   Γ ⊢ Q : ϕ
--------------------- →Intro             ------------------------- →Elim
Γ ⊢ λx^ϕ. N : ϕ → ψ                      Γ, ∆ ⊢ PQ : ψ

Γ ⊢ M : ⊥
--------------------- ⊥Elim
Γ ⊢ contr_ϕ(M) : ϕ

These are the typing rules of the simply typed lambda calculus extended
with product, sum and bottom.
In addition, F(Γ, M) is actually a type checking algorithm; it returns
the type of the term with respect to the context, or is undefined if the term is
ill-typed with respect to the context.

47.6 Reduction

In natural deduction derivations, an introduction rule that is followed by an


elimination rule is redundant. For instance, the derivation

ϕ   ϕ → ψ
----------- →Elim
     ψ            [χ]
     ------------------ ∧Intro
           ψ ∧ χ
           ------- ∧Elim
             ψ
           ------- →Intro
           χ → ψ

can be replaced with the simpler derivation:

ϕ   ϕ → ψ
----------- →Elim
     ψ
   ------- →Intro
   χ → ψ

As we see, an ∧Intro followed by ∧Elim "cancels out." In general, we see
that the conclusion of ∧Elim is always the formula on one side of the conjunction,
and the premises of ∧Intro require both sides of the conjunction; thus if
we need a derivation of either side, we can simply use that derivation without
introducing the conjunction and then eliminating it.


Thus in general we have:

 D1      D2
 ϕ1      ϕ2
------------ ∧Intro
  ϕ1 ∧ ϕ2               ▹1      Di
  --------- ∧Elim_i             ϕi
     ϕi

The ▹1 symbol has a similar meaning as in the lambda calculus, i.e., a single
step of a reduction. In the proof term syntax for derivations, the above
reduction rule thus becomes:

(Γ, pi(⟨M1^ϕ1, M2^ϕ2⟩)) ▹1 (Γ, Mi)

In the typed lambda calculus, this is the beta reduction rule for the product
type.
Note the type annotations on M1 and M2: while in the standard term syntax
only λx^ϕ. N carries such an annotation, we reuse the notation here to remind us of the
formula the term is associated with in the corresponding natural deduction
derivation, to reveal the correspondence between the two kinds of syntax.
In natural deduction, a pair of inferences such as those on the left, i.e., a
pair that is subject to cancelling, is called a cut. In the typed lambda calculus
the term on the left of ▹1 is called a redex, and the term on the right is called
the reductum. Unlike in the untyped lambda calculus, where only (λx. N)Q is considered
a redex, in the typed lambda calculus the syntax is extended to
terms involving ⟨N, M⟩, pi(N), ini^ϕ(N), case(N, x1.M1, x2.M2), and contr_ϕ(N),
with corresponding redexes.
Similarly we have reduction for disjunction:

[ ϕ1 ] u [ ϕ2 ] u D
D
D1 D2 .1 ϕi
ϕi
ϕ1 ∨ ϕ2 ∨Intro χ χ Di
u
χ ∨Elim
χ

This corresponds to a reduction on proof terms:


ϕ ϕ ϕ ϕ
( Γ, case(ini i ( M ϕi ), x1 1 .N1χ , x2 2 .N2χ )) .1 ( Γ, Niχ [ M ϕi /xi i ])

This is the beta reduction rule for sum types. Here, M[N/x] means replacing
all assumptions denoted by the variable x in M with N.
It would be nice if we passed the context Γ to the substitution function so that
it can check if the substitution makes sense. For example, xy[ ab/y] does not
make sense under the context { x : ϕ → θ, y : ϕ, a : ψ → χ, b : ψ} since then we


would be substituting a derivation of χ where a derivation of ϕ is expected.


However, as long as our usage of substitution is careful enough to avoid such
errors, we won’t have to worry about such conflicts. Thus we can define it
recursively as we did for untyped lambda calculus as if we are dealing with
untyped terms.
Finally, the reduction of the function type corresponds to the removal of a detour
of a →Intro followed by a →Elim:

 [ϕ]^u
   ⋮
   ψ                                   D′
------- u →Intro      D′               ϕ
 ϕ → ψ                ϕ       ▹1       ⋮
------------------ →Elim               ψ
        ψ

For proof terms, this amounts to ordinary beta reduction:

(Γ, (λx^ϕ. N^ψ) Q^ϕ) ▹1 (Γ, N^ψ[Q^ϕ/x^ϕ])
Absurdity has only an elimination rule and no introduction rule, thus there
is no such reduction for it.
Note that the above notion of reduction concerns only derivations with a
cut at the end. We would of course like to extend it to reductions
of cuts anywhere in a derivation, or reductions of subterms of proof terms
which constitute redexes. Note, however, that the conclusion
does not change after reduction, thus we are free to continue applying rules
to both sides of ▹1. The resulting pairs of trees constitute an extended notion
of reduction; it is analogous to compatibility in the untyped lambda calculus.
It's easy to see that the context Γ does not change during reduction
(both in the original and in the extended version), thus it's unnecessary to mention
the context when we are discussing reductions. In what follows we will
assume that every term is accompanied by a context which does not change
during reduction. We then say "proof term" when we mean a proof term accompanied
by a context which makes it well-typed.
As in the lambda calculus, the notions of normal-form term and normal deduction
are given:

Definition 47.5. A proof term with no redex is said to be in normal form; likewise,
a derivation without cuts is a normal derivation. A proof term is in normal
form if and only if its counterpart derivation is normal.

47.7 Normalization
In this section we prove that, via a suitable reduction order, any derivation
can be reduced to a normal derivation; this is called the normalization property.
We will make use of the propositions-as-types correspondence: we show that
every proof term can be reduced to a normal form; normalization for natural
deduction derivations then follows.
Firstly we define some functions that measure the complexity of terms.
The length len(ϕ) of a formula is defined by

len( p) = 0
len( ϕ ∧ ψ) = len( ϕ) + len(ψ) + 1
len( ϕ ∨ ψ) = len( ϕ) + len(ψ) + 1
len( ϕ → ψ) = len( ϕ) + len(ψ) + 1.

The complexity of a redex M is measured by its cut rank cr(M):

cr((λx^ϕ. N^ψ) Q) = len(ϕ) + len(ψ) + 1
cr(pᵢ(⟨M^ϕ, N^ψ⟩)) = len(ϕ) + len(ψ) + 1
cr(case(inᵢ(M^ϕᵢ), x₁^ϕ₁.N₁^χ, x₂^ϕ₂.N₂^χ)) = len(ϕ₁) + len(ϕ₂) + 1

The complexity of a proof term is measured by its most complex redex, and
is 0 if it is normal:

mr(M) = max{cr(N) | N is a subterm of M and is a redex}
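These measures are straightforward to compute. A small Haskell sketch (our own, matching the definitions above); the cut rank of a redex is just the length of its cut formula, i.e., of ϕ → ψ, ϕ ∧ ψ, or ϕ₁ ∨ ϕ₂, respectively.

data Formula
  = Atom String
  | And Formula Formula
  | Or  Formula Formula
  | Imp Formula Formula
  deriving (Eq, Show)

-- len(ϕ): atoms count 0, each connective adds 1.
len :: Formula -> Int
len (Atom _)  = 0
len (And p q) = len p + len q + 1
len (Or  p q) = len p + len q + 1
len (Imp p q) = len p + len q + 1

-- E.g., len (Imp (Atom "p") (And (Atom "q") (Atom "r"))) == 2.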

Lemma 47.6. If M[N^ϕ/x^ϕ] is a redex and M ≢ x, then one of the following cases
holds:

1. M is itself a redex, or

2. M is of the form pᵢ(x), and N is of the form ⟨P₁, P₂⟩, or

3. M is of the form case(x, x₁.P₁, x₂.P₂), and N is of the form inᵢ(Q), or

4. M is of the form xQ, and N is of the form λx. P.

In the first case, cr(M[N/x]) = cr(M); in the other cases, cr(M[N/x]) = len(ϕ).

Proof. By induction on M.

1. If M is a single variable y and y ≢ x, then y[N/x] is y, hence not a redex.

2. If M is of the form ⟨N₁, N₂⟩, or λx. N, or inᵢ(N), then M[N^ϕ/x^ϕ] is also
of that form, and so is not a redex.

3. If M is of the form pᵢ(P), we consider two cases.

a) If P is of the form ⟨P₁, P₂⟩, then M ≡ pᵢ(⟨P₁, P₂⟩) is a redex, and clearly

M[N/x] ≡ pᵢ(⟨P₁[N/x], P₂[N/x]⟩)

is also a redex. The cut ranks are equal.

b) If P is a single variable, it must be x for the substitution to yield a
redex, and N must be of the form ⟨P₁, P₂⟩. Now consider

M[N/x] ≡ pᵢ(x)[⟨P₁, P₂⟩/x],

which is pᵢ(⟨P₁, P₂⟩). Its cut rank is len(ϕ).

The cases of case(N, x₁.N₁, x₂.N₂) and PQ are similar.

Lemma 47.7. If M contracts to M′, and cr(M) > cr(N) for all proper redex sub-
terms N of M, then cr(M) > mr(M′).

Proof. By cases.

1. If M is of the form pᵢ(⟨M₁, M₂⟩), then M′ is Mᵢ; since any subterm of
Mᵢ is also a proper subterm of M, the claim holds.

2. If M is of the form (λx^ϕ. N) Q^ϕ, then M′ is N[Q^ϕ/x^ϕ]. Consider a redex
in M′. Either there is a corresponding redex in N with equal cut rank,
which is less than cr(M) by assumption, or its cut rank equals len(ϕ),
which by definition is less than cr((λx^ϕ. N) Q).

3. If M is of the form

case(inᵢ(N^ϕᵢ), x₁^ϕ₁.N₁^χ, x₂^ϕ₂.N₂^χ),

then M′ ≡ Nᵢ[N^ϕᵢ/xᵢ^ϕᵢ]. Consider a redex in M′. Either there is a corre-
sponding redex in Nᵢ with equal cut rank, which is less than cr(M) by
assumption; or its cut rank equals len(ϕᵢ), which by definition is less
than cr(case(inᵢ(N^ϕᵢ), x₁^ϕ₁.N₁^χ, x₂^ϕ₂.N₂^χ)).

Theorem 47.8. All proof terms reduce to normal form; all derivations reduce to
normal derivations.

Proof. The second claim follows from the first. We prove the first by complete
induction on m = mr(M), where M is a proof term.

1. If m = 0, M is already normal.

2. Otherwise, we proceed by induction on n, the number of redexes in M
with cut rank equal to m.

a) If n = 1, select a redex N such that m = cr(N) > cr(P) for any
proper subterm P of N which is also a redex. Such a redex must
exist, since any term has only finitely many subterms. Let N′
denote the reductum of N. Now by the lemma mr(N′) < mr(N),
so the number of redexes with cut rank equal to m decreases to 0.
So m is decreased (by 1 or more), and we can apply the inductive
hypothesis for m.

b) For the induction step, assume n > 1. The process is similar, except
that n is only decreased to a positive number, and thus m does not
change. We simply apply the induction hypothesis for n.

The normalization of terms is actually not specific to the reduction order
we chose. In fact, one can prove that regardless of the order in which redexes
are reduced, the term always reduces to a normal form. This property is called
strong normalization.



Part XI

Counterfactuals

Chapter 48

Introduction

48.1 The Material Conditional

In its simplest form in English, a conditional is a sentence of the form “If
. . . then . . . ,” where the . . . are themselves sentences, such as “If the butler
did it, then the gardener is innocent.” In introductory logic courses, we learn
to symbolize conditionals using the → connective: symbolize the parts indicated
by . . . , e.g., by formulas ϕ and ψ, and the entire conditional is symbolized by
ϕ → ψ.
The connective → is truth-functional, i.e., the truth value—T or F—of ϕ →
ψ is determined by the truth values of ϕ and ψ: ϕ → ψ is true iff ϕ is false
or ψ is true, and false otherwise. Relative to a truth value assignment v, we
define v ⊨ ϕ → ψ iff v ⊭ ϕ or v ⊨ ψ. The connective → with this semantics is
called the material conditional.
This definition results in a number of elementary logical facts. First of all,
the deduction theorem holds for the material conditional:

If Γ, ϕ ⊨ ψ then Γ ⊨ ϕ → ψ (48.1)

It is truth-functional: ϕ → ψ and ¬ϕ ∨ ψ are equivalent:

ϕ → ψ ⊨ ¬ϕ ∨ ψ (48.2)
¬ϕ ∨ ψ ⊨ ϕ → ψ (48.3)

A material conditional is entailed by its consequent and by the negation of its
antecedent:

ψ ⊨ ϕ → ψ (48.4)
¬ϕ ⊨ ϕ → ψ (48.5)


A false material conditional is equivalent to the conjunction of its antecedent
and the negation of its consequent: if ϕ → ψ is false, ϕ ∧ ¬ψ is true, and vice
versa:

¬(ϕ → ψ) ⊨ ϕ ∧ ¬ψ (48.6)
ϕ ∧ ¬ψ ⊨ ¬(ϕ → ψ) (48.7)

The material conditional supports modus ponens:

ϕ, ϕ → ψ ⊨ ψ (48.8)

The material conditional agglomerates:

ϕ → ψ, ϕ → χ ⊨ ϕ → (ψ ∧ χ) (48.9)

We can always strengthen the antecedent, i.e., the conditional is monotonic:

ϕ → ψ ⊨ (ϕ ∧ χ) → ψ (48.10)

The material conditional is transitive, i.e., the chain rule is valid:

ϕ → ψ, ψ → χ ⊨ ϕ → χ (48.11)

The material conditional is equivalent to its contrapositive:

ϕ → ψ ⊨ ¬ψ → ¬ϕ (48.12)
¬ψ → ¬ϕ ⊨ ϕ → ψ (48.13)
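Because → is truth-functional, each of these facts can be verified mechanically by enumerating all truth value assignments. A minimal Haskell sketch (our own illustration; the names impl, negConditional, and strengthening are ours):

impl :: Bool -> Bool -> Bool
impl p q = not p || q

bools :: [Bool]
bools = [False, True]

-- (48.6)/(48.7): ¬(ϕ → ψ) is equivalent to ϕ ∧ ¬ψ.
negConditional :: Bool
negConditional = and [ not (impl p q) == (p && not q) | p <- bools, q <- bools ]

-- (48.10): ϕ → ψ entails (ϕ ∧ χ) → ψ (antecedent strengthening).
strengthening :: Bool
strengthening = and [ not (impl p q) || impl (p && c) q
                    | p <- bools, q <- bools, c <- bools ]

Both negConditional and strengthening evaluate to True.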

These are all useful and unproblematic inferences in mathematical rea-
soning. However, the philosophical and linguistic literature is replete with
purported counterexamples to the equivalent inferences in non-mathematical
contexts. These suggest that the material conditional → is not—or at least
not always—the appropriate connective to use when symbolizing English “if
. . . then . . . ” statements.

48.2 Paradoxes of the Material Conditional


One of the first to criticize the use of ϕ → ψ as a way to symbolize “if . . . then
. . . ” statements of English was C. I. Lewis. Lewis was criticizing the use of
the material conditional in Whitehead and Russell's Principia Mathematica,
which pronounced → as “implies.” Lewis rightly complained that if → meant “im-
plies,” then any false proposition p implies that p implies q, since p → (p → q)
is true if p is false, and that any true proposition q implies that p implies q,
since q → (p → q) is true if q is true.


Logicians of course know that implication, i.e., logical entailment, is not
a connective but a relation between formulas or statements. So we should
just not read → as “implies” to avoid confusion.1 As long as we don't, the
particular worry that Lewis had simply does not arise: p does not “imply” q
even if we think of p as standing for a false English sentence. To determine
if p ⊨ q we must consider all valuations, and p ⊭ q even when we use p to
symbolize a sentence which happens to be false.
But there is still something odd about “if . . . then. . . ” statements such as
Lewis’s

If the moon is made of green cheese, then 2 + 2 = 4.

and about the inferences

The moon is not made of green cheese. Therefore, if the moon is
made of green cheese, then 2 + 2 = 4.

2 + 2 = 4. Therefore, if the moon is made of green cheese, then
2 + 2 = 4.

Yet, if “if . . . then . . . ” were just →, the sentence would be unproblematically
true, and the inferences unproblematically valid.
Another example concerns the tautology (ϕ → ψ) ∨ (ψ → ϕ). This would
suggest that if you take two indicative sentences S and T from the newspaper
at random, the sentence “If S then T, or if T then S” should be true.

48.3 The Strict Conditional

Lewis introduced the strict conditional J and argued that it, not the material
conditional, corresponds to implication. In alethic modal logic, ϕ J ψ can
be defined as □(ϕ → ψ). A strict conditional is thus true (at a world) iff the
corresponding material conditional is necessary.
How does the strict conditional fare vis-à-vis the paradoxes of the material
conditional? A strict conditional with a false antecedent, or one with a true
consequent, may be true or may be false. Moreover, (ϕ J ψ) ∨ (ψ J ϕ) is
not valid. The strict conditional ϕ J ψ is also not equivalent to ¬ϕ ∨ ψ, so it is
not truth-functional.

1 Reading “→” as “implies” is still widely practised by mathematicians and computer scien-

tists, although philosophers try to avoid the confusions Lewis highlighted by pronouncing it as
“only if.”


We have:

ϕ J ψ ⊨ ¬ϕ ∨ ψ but: (48.14)
¬ϕ ∨ ψ ⊭ ϕ J ψ (48.15)
ψ ⊭ ϕ J ψ (48.16)
¬ϕ ⊭ ϕ J ψ (48.17)
¬(ϕ J ψ) ⊭ ϕ ∧ ¬ψ but: (48.18)
ϕ ∧ ¬ψ ⊨ ¬(ϕ J ψ) (48.19)

However, the strict conditional still supports modus ponens:

ϕ, ϕ J ψ ⊨ ψ (48.20)

The strict conditional agglomerates:

ϕ J ψ, ϕ J χ ⊨ ϕ J (ψ ∧ χ) (48.21)

Antecedent strengthening holds for the strict conditional:

ϕ J ψ ⊨ (ϕ ∧ χ) J ψ (48.22)

The strict conditional is also transitive:

ϕ J ψ, ψ J χ ⊨ ϕ J χ (48.23)

Finally, the strict conditional is equivalent to its contrapositive:

ϕ J ψ ⊨ ¬ψ J ¬ϕ (48.24)
¬ψ J ¬ϕ ⊨ ϕ J ψ (48.25)
However, the strict conditional still has its own “paradoxes.” Just as a ma-
terial conditional with a false antecedent or a true consequent is true, a strict
conditional with a necessarily false antecedent or a necessarily true consequent
is true. Moreover, any true strict conditional is necessarily true, and any false
strict conditional is necessarily false. In other words, we have

□¬ϕ ⊨ ϕ J ψ (48.26)
□ψ ⊨ ϕ J ψ (48.27)
ϕ J ψ ⊨ □(ϕ J ψ) (48.28)
¬(ϕ J ψ) ⊨ □¬(ϕ J ψ) (48.29)
These are not problems if you think of J as “implies.” Logical entailment rela-
tionships are, after all, mathematical facts and so can’t be contingent. But they
do raise issues if you want to use J as a logical connective that is supposed to
capture “if . . . then . . . ,” especially the last two. For surely there are “if . . . then
. . . ” statements that are contingently true or contingently false—in fact, they
generally are neither necessary nor impossible.
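To see concretely that the strict conditional is not truth-functional, e.g., fact (48.15) above, we can evaluate it in a tiny two-world S5 model, where every world accesses every world, so that ϕ J ψ holds at a world iff ϕ → ψ holds at all worlds. The following Haskell sketch and its model (p true only at world 2, q false everywhere) are our own hypothetical example:

type World = Int

worlds :: [World]
worlds = [1, 2]

p, q :: World -> Bool
p w = w == 2      -- p: false at world 1, true at world 2
q _ = False       -- q: false at both worlds

matImpl :: World -> Bool
matImpl w = not (p w) || q w        -- p → q at w

strictImpl :: World -> Bool
strictImpl _ = all matImpl worlds   -- p J q, i.e., □(p → q), access universal

-- At world 1, ¬p ∨ q is true but p J q is false: p → q fails at world 2.
counterexample :: Bool
counterexample = (not (p 1) || q 1) && not (strictImpl 1)   -- True

This is also the kind of countermodel Problem 48.1 asks for.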


48.4 Counterfactuals
A very common and important form of “if . . . then . . . ” constructions in En-
glish are built using the past subjunctive form of to be: “if it were the case that
. . . then it would be the case that . . . ” Because usually the antecedent of such
a conditional is false, i.e., counter to fact, they are called counterfactual con-
ditionals (and, because they use the subjunctive form of to be, also subjunctive
conditionals). They are distinguished from indicative conditionals, which take
the form of “if it is the case that . . . then it is the case that . . . ” Counterfac-
tual and indicative conditionals differ in truth conditions. Consider Adams's
famous example:

If Oswald didn’t kill Kennedy, someone else did.


If Oswald hadn’t killed Kennedy, someone else would have.

The first is indicative, the second counterfactual. The first is clearly true: we
know JFK was killed by someone, and if that someone wasn’t (contrary to the
Warren Report) Lee Harvey Oswald, then someone else killed JFK. The second
one says something different. It claims that if Oswald hadn’t killed Kennedy,
i.e., if the Dallas shooting had been avoided or had been unsuccessful, history
would have subsequently unfolded in such a way that another assassination
would have been successful. In order for it to be true, it would have to be the
case that powerful forces had conspired to ensure JFK’s death (as many JFK
conspiracy theorists believe).
It is a live debate whether the indicative conditional is correctly captured
by the material conditional, in particular, whether the paradoxes of the ma-
terial conditional can be “explained” in a way that is compatible with it giv-
ing the truth conditions for English indicative conditionals. By contrast, it
is uncontroversial that counterfactual conditionals cannot be symbolized cor-
rectly by the material conditionals. That is clear because, even though gener-
ally the antecedents of counterfactuals are false, not all counterfactuals with
false antecedents are true—for instance, if you believe the Warren Report, and
there was no conspiracy to assassinate JFK, then Adams’s counterfactual con-
ditional is an example.
Counterfactual conditionals play an important role in causal reasoning: a
prime example of the use of counterfactuals is to express causal relationships.
E.g., striking a match causes it to light, and you can express this by saying
“if this match were struck, it would light.” Material, and generally indicative
conditionals, cannot be used to express this: “the match is struck → the match
lights” is true if the match is never struck, regardless of what would happen
if it were. Even worse, “the match is struck → the match turns into a bouquet
of flowers” is also true if it is never struck, but the match would certainly not
turn into a bouquet of flowers if it were struck.


It is still debated what exactly the correct logic of counterfactuals is. An
influential analysis of counterfactuals was given by Stalnaker and Lewis. Ac-
cording to them, a counterfactual “if it were the case that S then it would be
the case that T” is true iff T is true in the counterfactual situation (“possible
world”) that is closest to the way the actual world is and where S is true. This
is called an “ontic” analysis, since it makes reference to an ontology of possi-
ble worlds. Other analyses make use of conditional probabilities or theories
of belief revision. There is a proliferation of different proposed logics of coun-
terfactuals. There isn’t even a single Lewis-Stalnaker logic of counterfactuals:
even though Stalnaker and Lewis proposed accounts along similar lines with
reference to closest possible worlds, the assumptions they made result in dif-
ferent valid inferences.

Problems
Problem 48.1. Give S5-counterexamples to the entailment relations which do
not hold for the strict conditional, i.e., for:

1. ¬p ⊭ □(p → q)

2. q ⊭ □(p → q)

3. ¬□(p → q) ⊭ p ∧ ¬q

4. ⊭ □(p → q) ∨ □(q → p)

Problem 48.2. Show that the valid entailment relations hold for the strict con-
ditional by giving S5-proofs of:

1. □(ϕ → ψ) ⊨ ¬ϕ ∨ ψ

2. ϕ ∧ ¬ψ ⊨ ¬□(ϕ → ψ)

3. ϕ, □(ϕ → ψ) ⊨ ψ

4. □(ϕ → ψ), □(ϕ → χ) ⊨ □(ϕ → (ψ ∧ χ))

5. □(ϕ → ψ) ⊨ □((ϕ ∧ χ) → ψ)

6. □(ϕ → ψ), □(ψ → χ) ⊨ □(ϕ → χ)

7. □(ϕ → ψ) ⊨ □(¬ψ → ¬ϕ)

8. □(¬ψ → ¬ϕ) ⊨ □(ϕ → ψ)

Problem 48.3. Give proofs in S5 of:

1. □¬ϕ ⊨ ϕ J ψ

2. ϕ J ψ ⊨ □(ϕ J ψ)

3. ¬(ϕ J ψ) ⊨ □¬(ϕ J ψ)

Use the definition of J to do so.



Chapter 49

Minimal Change Semantics

49.1 Introduction
Stalnaker and Lewis proposed accounts of counterfactual conditionals such
as “If the match were struck, it would light.” Their accounts were propos-
als for how to properly understand the truth conditions for such sentences.
The idea behind both proposals is this: to evaluate whether a counterfactual
conditional is true, we have to consider those possible worlds which are min-
imally different from the way the world actually is to make the antecedent
true. If the consequent is true in these possible worlds, then the counterfac-
tual is true. For instance, suppose I hold a match and a matchbook in my
hand. In the actual world I only look at them and ponder what would hap-
pen if I were to strike the match. The minimal change from the actual world
where I strike the match is that where I decide to act and strike the match. It
is minimal in that nothing else changes: I don’t also jump in the air, striking
the match doesn’t also light my hair on fire, I don’t suddenly lose all strength
in my fingers, I am not simultaneously doused with water in a SuperSoaker
ambush, etc. In that alternative possibility, the match lights. Hence, it’s true
that if I were to strike the match, it would light.
This intuitive account can be paired with formal semantics for logics of
counterfactuals. Lewis introduced the symbol “€” for the counterfactual
while Stalnaker used the symbol “>”. We'll use €, and add it as a binary
connective to propositional logic. So, we have, in addition to formulas of the
form ϕ → ψ, also formulas of the form ϕ € ψ. The formal semantics, like the
relational semantics for modal logic, is based on models in which formulas are
evaluated at worlds, and the satisfaction condition defining M, w ⊨ ϕ € ψ is
given in terms of M, w′ ⊨ ϕ and M, w′ ⊨ ψ for some (other) worlds w′. Which
w′? Intuitively, the one(s) closest to w for which it holds that M, w′ ⊨ ϕ. This
requires that a relation of “closeness” has to be included in the model as well.
Lewis introduced an instructive way of representing counterfactual situa-
tions graphically. Each possible world is at the center of a set of nested spheres


containing other worlds—we draw these spheres as concentric circles. The
worlds between two spheres are all equally close to the world at the center,
those contained in a nested sphere are closer, and those in a surrounding
sphere further away.
each other, those contained in a nested sphere are closer, and those in a sur-
rounding sphere further away.

[Figure: a system of spheres around w, with the gray area marking the closest ϕ-worlds]

The closest ϕ-worlds are those worlds w′ where ϕ is satisfied which lie in the
smallest sphere around the center world w (the gray area). Intuitively, ϕ € ψ
is satisfied at w if ψ is true at all closest ϕ-worlds.

49.2 Sphere Models


One way of providing a formal semantics for counterfactuals is to turn Lewis’s
informal account into a mathematical structure. The spheres around a world w
then are sets of worlds. Since the spheres are nested, the sets of worlds around w
have to be linearly ordered by the subset relation.

Definition 49.1. A sphere model is a triple M = ⟨W, O, V⟩ where W is a non-
empty set of worlds, V : At₀ → ℘(W) is a valuation, and O : W → ℘(℘(W))
assigns to each world w a system of spheres Ow. For each w, Ow is a set of sets
of worlds, and must satisfy:

1. Ow is centered on w: {w} ∈ Ow .

2. Ow is nested: whenever S1, S2 ∈ Ow, S1 ⊆ S2 or S2 ⊆ S1, i.e., Ow is
linearly ordered by ⊆.

3. Ow is closed under non-empty unions.

4. Ow is closed under non-empty intersections.

The intuition behind Ow is that the worlds “around” w are stratified ac-
cording to how far away they are from w. The innermost sphere is just w by
itself, i.e., the set {w}: w is closer to w than the worlds in any other sphere. If
S ⊊ S′, then the worlds in S′ \ S are further away from w than the worlds in S:
S′ \ S is the “layer” between S and the worlds outside of S′. In particular,
we have to think of the spheres as containing all the worlds within their outer
surface; they are not just the individual layers.


Figure 49.1: Diagram of a sphere model

The diagram in Figure 49.1 corresponds to the sphere model with W =
{w, w1, . . . , w7} and V(p) = {w5, w6, w7}. The innermost sphere is S1 = {w}.
The closest worlds to w are w1, w2, w3, so the next larger sphere is S2 =
{w, w1, w2, w3}. The worlds further out are w4, w5, w6, so the outermost sphere
is S3 = {w, w1, . . . , w6}. The system of spheres around w is Ow = {S1, S2, S3}.
The world w7 is not in any sphere around w. The closest worlds in which p is
true are w5 and w6, and so the smallest p-admitting sphere is S3.
To define satisfaction of a formula ϕ at world w in a sphere model M,
M, w ⊨ ϕ, we expand the definition for modal formulas to include a clause
for ψ € χ:

Definition 49.2. M, w ⊨ ψ € χ iff either

1. for all u ∈ ⋃Ow, M, u ⊭ ψ, or

2. for some S ∈ Ow,

a) M, u ⊨ ψ for some u ∈ S, and

b) for all v ∈ S, either M, v ⊭ ψ or M, v ⊨ χ.

According to this definition, M, w ⊨ ψ € χ iff either the antecedent ψ
is false everywhere in the spheres around w, or there is a sphere S where ψ
is true, and the material conditional ψ → χ is true at all worlds in that “ψ-
admitting” sphere. Note that we didn't require in the definition that S is the
innermost ψ-admitting sphere, contrary to what one might expect from the
intuitive explanation. But if condition (2) is satisfied for some sphere S, then
it is also satisfied for every ψ-admitting sphere contained in S, and hence in
particular for the innermost one.
Note also that the definition of sphere models does not require that there
is an innermost ψ-admitting sphere: we may have an infinite sequence S1 ⊋
S2 ⊋ · · · ⊋ {w} of ψ-admitting spheres, and hence no innermost ψ-admitting
sphere. In that case, M, w ⊨ ψ € χ iff ψ → χ holds throughout the spheres
Si, Si+1, . . . , for some i.

Figure 49.2: Non-vacuously true counterfactual

Figure 49.3: Vacuously true counterfactual

49.3 Truth and Falsity of Counterfactuals


A counterfactual ϕ € ψ is (non-vacuously) true if the closest ϕ-worlds are all
ψ-worlds, as depicted in Figure 49.2. A counterfactual is also true at w if the
system of spheres around w has no ϕ-admitting spheres at all. In that case it
is vacuously true (see Figure 49.3).
It can be false in two ways. One way is if the closest ϕ-worlds are not all
ψ-worlds, but some of them are. In this case, ϕ € ¬ψ is also false (see Fig-
ure 49.4). If the closest ϕ-worlds do not overlap with the ψ-worlds at all, then
ϕ € ψ is false. But in this case all the closest ϕ-worlds are ¬ψ-worlds, and so
ϕ € ¬ψ is true (see Figure 49.5).

Figure 49.4: False counterfactual, false opposite

Figure 49.5: False counterfactual, true opposite

In contrast to the strict conditional, counterfactuals may be contingent.
Consider the sphere model in Figure 49.6. The ϕ-worlds closest to u are all
ψ-worlds, so M, u ⊨ ϕ € ψ. But there are ϕ-worlds closest to v which are not
ψ-worlds, so M, v ⊭ ϕ € ψ.

49.4 Antecedent Strengthening


“Strengthening the antecedent” refers to the inference ϕ → χ ⊨ (ϕ ∧ ψ) → χ. It
is valid for the material conditional, but invalid for counterfactuals. Suppose
it is true that if I were to strike this match, it would light. (That means, there is
nothing wrong with the match or the matchbook surface, I will not break the
match, etc.) But it is not true that if I were to strike this match in outer space, it
would light. So the following inference is invalid:

If the match were struck, it would light.

Therefore, if the match were struck in outer space, it would light.

The Lewis-Stalnaker account of conditionals explains this: the closest world
where I light the match and do so in outer space is much further removed
from the actual world than the closest world where I light the match is. So
although it's true that the match lights in the latter, it is not in the former. And
that is as it should be.


Figure 49.6: Contingent counterfactual

Figure 49.7: Counterexample to antecedent strengthening

Example 49.3. The sphere semantics invalidates the inference, i.e., we have
p € r ⊭ (p ∧ q) € r. Consider the model M = ⟨W, O, V⟩ where W =
{w, w1, w2}, Ow = {{w}, {w, w1}, {w, w1, w2}}, V(p) = {w1, w2}, V(q) =
{w2}, and V(r) = {w1}. There is a p-admitting sphere S = {w, w1} and p → r
is true at all worlds in it, so M, w ⊨ p € r. There is also a (p ∧ q)-admitting
sphere S′ = {w, w1, w2} but M, w2 ⊭ (p ∧ q) → r, so M, w ⊭ (p ∧ q) € r (see
Figure 49.7).
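Definition 49.2 can be evaluated directly on finite models, which makes examples like this one easy to check. The following Haskell sketch is our own (the names wouldCf, ow, etc. are ours); it encodes w, w1, w2 as 0, 1, 2 and recomputes Example 49.3. Examples 49.4 and 49.5 can be checked in the same way.

type World  = Int
type Sphere = [World]

-- M, w ⊨ ψ € χ per Definition 49.2, given the system of spheres around w
-- and satisfaction predicates for ψ and χ.
wouldCf :: [Sphere] -> (World -> Bool) -> (World -> Bool) -> Bool
wouldCf os psi chi =
     all (not . psi) (concat os)                               -- vacuous case
  || any (\s -> any psi s && all (\v -> not (psi v) || chi v) s) os

-- Example 49.3: Ow = {{w}, {w, w1}, {w, w1, w2}}, V(p) = {w1, w2},
-- V(q) = {w2}, V(r) = {w1}.
ow :: [Sphere]
ow = [[0], [0, 1], [0, 1, 2]]

pV, qV, rV :: World -> Bool
pV u = u == 1 || u == 2
qV u = u == 2
rV u = u == 1

checks :: (Bool, Bool)
checks = ( wouldCf ow pV rV                        -- p € r at w:       True
         , wouldCf ow (\u -> pV u && qV u) rV )    -- (p ∧ q) € r at w: False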

49.5 Transitivity
For the material conditional, the chain rule holds: ϕ → ψ, ψ → χ ⊨ ϕ → χ.
In other words, the material conditional is transitive. Is the same true for
counterfactuals? Consider the following example due to Stalnaker.


If J. Edgar Hoover had been born a Russian, he would have been a
Communist.

If J. Edgar Hoover were a Communist, he would have been a
traitor.

Therefore, if J. Edgar Hoover had been born a Russian, he would
have been a traitor.

If Hoover had been born (at the same time he actually did), not in the United
States, but in Russia, he would have grown up in the Soviet Union and become
a Communist (let’s assume). So the first premise is true. Likewise, the second
premise, considered in isolation is true. The conclusion, however, is false:
in all likelihood, Hoover would have been a fervent Communist if he had
been born in the USSR, and not been a traitor (to his country). The intuitive
assignment of truth values is borne out by the Stalnaker-Lewis account. The
closest possible world to ours with the only change being Hoover’s place of
birth is the one where Hoover grows up to be a good citizen of the USSR.
This is the closest possible world where the antecedent of the first premise
and of the conclusion is true, and in that world Hoover is a loyal member of
the Communist party, and so not a traitor. To evaluate the second premise, we
have to look at a different world, however: the closest world where Hoover is
a Communist, which is one where he was born in the United States, turned,
and thus became a traitor.1

Example 49.4. The sphere semantics invalidates the inference, i.e., we have
p € q, q € r ⊭ p € r. Consider the model M = ⟨W, O, V⟩ where W =
{w, w1, w2}, Ow = {{w}, {w, w1}, {w, w1, w2}}, V(p) = {w2}, V(q) = {w1, w2},
and V(r) = {w1}. There is a p-admitting sphere S = {w, w1, w2} and p → q is
true at all worlds in it, so M, w ⊨ p € q. There is also a q-admitting sphere
S′ = {w, w1} and q → r is true at all worlds in it, so M, w ⊨ q € r. How-
ever, the only p-admitting sphere {w, w1, w2} contains a world, namely w2,
where M, w2 ⊭ p → r, so M, w ⊭ p € r.

49.6 Contraposition
Material and strict conditionals are equivalent to their contrapositives. Coun-
terfactuals are not. Here is an example due to Kratzer:

If Goethe hadn’t died in 1832, he would (still) be dead now.


If Goethe weren’t dead now, he would have died in 1832.
1 Of course, to appreciate the force of the example we have to take on board some metaphysi-

cal and political assumptions, e.g., that it is possible that Hoover could have been born to Russian
parents, or that Communists in the US of the 1950s were traitors to their country.


Figure 49.8: Counterexample to contraposition

The first sentence is true: humans don’t live hundreds of years. The second
is clearly false: if Goethe weren’t dead now, he would be still alive, and so
couldn’t have died in 1832.

Example 49.5. The sphere semantics invalidates contraposition, i.e., we have
p € q ⊭ ¬q € ¬p. Think of p as “Goethe didn't die in 1832” and q as
“Goethe is dead now.” We can capture this in a model M = ⟨W, O, V⟩ with
W = {w, w1, w2}, Ow = {{w}, {w, w1}, {w, w1, w2}}, V(p) = {w1, w2} and
V(q) = {w, w1}. So w is the actual world where Goethe died in 1832 and is still
dead; w1 is the (close) world where Goethe died in, say, 1833, and is still dead;
and w2 is a (remote) world where Goethe is still alive. There is a p-admitting
sphere S = {w, w1} and p → q is true at all worlds in it, so M, w ⊨ p € q.
However, the ¬q-admitting sphere {w, w1, w2} contains a world, namely w2,
where q is false and p is true, so M, w2 ⊭ ¬q → ¬p, and hence M, w ⊭ ¬q € ¬p.

Problems
Problem 49.1. Find a convincing, intuitive example for the failure of transi-
tivity of counterfactuals.

Problem 49.2. Draw the sphere diagram corresponding to the counterexample
in Example 49.4.

Problem 49.3. In Example 49.4, world w2 is where Hoover is born in Russia,
is a communist, and not a traitor, and w1 is the world where Hoover is born
in the US, is a communist, and a traitor. In this model, w1 is closer to w than
w2 is. Is this necessary? Can you give a counterexample that does not assume
that Hoover's being born in Russia is a more remote possibility than him being
a Communist?



Part XII

Methods


This part covers general and methodological material, especially explanations
of various proof methods a non-mathematics student may be unfamiliar with.
It currently contains a chapter on how to write proofs, and a chapter on
induction, but additional sections for those, exercises, and a chapter on
mathematical terminology are also planned.



Chapter 50

Proofs

50.1 Introduction
Based on your experiences in introductory logic, you might be comfortable
with a proof system—probably a natural deduction or Fitch style proof sys-
tem, or perhaps a proof-tree system. You probably remember doing proofs
in these systems, either proving a formula or showing that a given argument is
valid. In order to do this, you applied the rules of the system until you got
the desired end result. In reasoning about logic, we also prove things, but in
most cases we are not using a proof system. In fact, most of the proofs we
consider are done in English (perhaps, with some symbolic language thrown
in) rather than entirely in the language of first-order logic. When constructing
such proofs, you might at first be at a loss—how do I prove something without
a proof system? How do I start? How do I know if my proof is correct?
Before attempting a proof, it’s important to know what a proof is and how
to construct one. As implied by the name, a proof is meant to show that some-
thing is true. You might think of this in terms of a dialogue—someone asks
you if something is true, say, if every prime other than two is an odd number.
To answer “yes” is not enough; they might want to know why. In this case,
you’d give them a proof.
In everyday discourse, it might be enough to gesture at an answer, or give
an incomplete answer. In logic and mathematics, however, we want rigorous
proof—we want to show that something is true beyond any doubt. This means
that every step in our proof must be justified, and the justification must be
cogent (i.e., the assumption you’re using is actually assumed in the statement
of the theorem you’re proving, the definitions you apply must be correctly
applied, the justifications appealed to must be correct inferences, etc.).
Usually, we’re proving some statement. We call the statements we’re prov-
ing by various names: propositions, theorems, lemmas, or corollaries. A
proposition is a basic proof-worthy statement: important enough to record,
but perhaps not particularly deep nor applied often. A theorem is a signifi-


cant, important proposition. Its proof often is broken into several steps, and
sometimes it is named after the person who first proved it (e.g., Cantor’s The-
orem, the Löwenheim-Skolem theorem) or after the fact it concerns (e.g., the
completeness theorem). A lemma is a proposition or theorem that is used
in the proof of a more important result. Confusingly, sometimes lemmas are
important results in themselves, and also named after the person who intro-
duced them (e.g., Zorn’s Lemma). A corollary is a result that easily follows
from another one.
A statement to be proved often contains some assumption that clarifies
which kinds of things we're proving something about. It might begin with
“Let ϕ be a formula of the form ψ → χ” or “Suppose Γ ⊢ ϕ” or something
of the sort. These are hypotheses of the proposition, theorem, or lemma, and
you may assume these to be true in your proof. They restrict what we’re
proving about, and also introduce some names for the objects we’re talking
about. For instance, if your proposition begins with “Let ϕ be a formula of the
form ψ → χ,” you’re proving something about all formulas of a certain sort
only (namely, conditionals), and it’s understood that ψ → χ is an arbitrary
conditional that your proof will talk about.

50.2 Starting a Proof


But where do you even start?
You’ve been given something to prove, so this should be the last thing that
is mentioned in the proof (you can, obviously, announce that you’re going to
prove it at the beginning, but you don’t want to use it as an assumption). Write
what you are trying to prove at the bottom of a fresh sheet of paper—this way
you don’t lose sight of your goal.
Next, you may have some assumptions that you are able to use (this will
be made clearer when we talk about the type of proof you are doing in the next
section). Write these at the top of the page and make sure to flag that they are
assumptions (i.e., if you are assuming x, write “assume that x,” or “suppose
that x”). Finally, there might be some definitions in the question that you
need to know. You might be told to use a specific definition, or there might
be various definitions in the assumptions or conclusion that you are working
towards. Write these down and ensure that you understand what they mean.
How you set up your proof will also be dependent upon the form of the
question. The next section provides details on how to set up your proof based
on the type of sentence.

50.3 Using Definitions


We mentioned that you must be familiar with all definitions that may be used
in the proof, and that you must be able to apply them properly. This is a really
important point, and it is worth looking at in a bit more detail. Definitions are used
to abbreviate properties and relations so we can talk about them more suc-
cinctly. The introduced abbreviation is called the definiendum, and what it
abbreviates is the definiens. In proofs, we often have to go back to how the
definiendum was introduced, because we have to exploit the logical structure
of the definiens (the long version of which the defined term is the abbrevia-
tion) to get through our proof. By unpacking definitions, you’re ensuring that
you’re getting to the heart of where the logical action is.
We’ll start with an example. Suppose you want to prove the following:

Proposition 50.1. For any sets X and Y, X ∪ Y = Y ∪ X.

In order to even start the proof, we need to know what it means for two sets
to be identical; i.e., we need to know what the “=” in that equation means for
sets. Sets are defined to be identical whenever they have the same elements.
So the definition we have to unpack is:

Definition 50.2. Sets X and Y are identical, X = Y, iff every element of X is
an element of Y, and vice versa.

This definition uses X and Y as placeholders for arbitrary sets. What it
defines—the definiendum—is the expression “X = Y” by giving the condition
under which X = Y is true. This condition—“every element of X is an element
of Y, and vice versa”—is the definiens.1 The definition specifies that X = Y is
true if, and only if (we abbreviate this to “iff”) the condition holds.
When you apply the definition, you have to match the X and Y in the
definition to the case you’re dealing with. In our case, it means that in order
for X ∪ Y = Y ∪ X to be true, each z ∈ X ∪ Y must also be in Y ∪ X, and
vice versa. The expression X ∪ Y in the proposition plays the role of X in the
definition, and Y ∪ X that of Y. Since X and Y are used both in the definition
and in the statement of the proposition we’re proving, but in different uses,
you have to be careful to make sure you don’t mix up the two. For instance, it
would be a mistake to think that you could prove the proposition by showing
that every element of X is an element of Y, and vice versa—that would show
that X = Y, not that X ∪ Y = Y ∪ X. (Also, since X and Y may be any two
sets, you won’t get very far, because if nothing is assumed about X and Y they
may well be different sets.)
Within the proof we are dealing with set-theoretic notions such as union,
and so we must also know the meanings of the symbol ∪ in order to under-
stand how the proof should proceed. And sometimes, unpacking the defini-
tion gives rise to further definitions to unpack. For instance, X ∪ Y is defined
as {z : z ∈ X or z ∈ Y }. So if you want to prove that x ∈ X ∪ Y, unpacking
1 In this particular case—and very confusingly!—when X = Y, the sets X and Y are just one

and the same set, even though we use different letters for it on the left and the right side. But the
ways in which that set is picked out may be different, and that makes the definition non-trivial.


the definition of ∪ tells you that you have to prove x ∈ {z : z ∈ X or z ∈ Y }.


Now you also have to remember that x ∈ {z : . . . z . . .} iff . . . x . . . . So, further
unpacking the definition of the {z : . . . z . . .} notation, what you have to show
is: x ∈ X or x ∈ Y. So, “every element of X ∪ Y is also an element of Y ∪ X”
really means: “for every x, if x ∈ X or x ∈ Y, then x ∈ Y or x ∈ X.” If we fully
unpack the definitions in the proposition, we see that what we have to show
is this:

Proposition 50.3. For any sets X and Y: (a) for every x, if x ∈ X or x ∈ Y, then
x ∈ Y or x ∈ X, and (b) for every x, if x ∈ Y or x ∈ X, then x ∈ X or x ∈ Y.

What’s important is that unpacking definitions is a necessary part of con-


structing a proof. Properly doing it is sometimes difficult: you must be careful
to distinguish and match the variables in the definition and the terms in the
claim you’re proving. In order to be successful, you must know what the
question is asking and what all the terms used in the question mean—you
will often need to unpack more than one definition. In simple proofs such as
the ones below, the solution follows almost immediately from the definitions
themselves. Of course, it won’t always be this simple.

50.4 Inference Patterns


Proofs are composed of individual inferences. When we make an inference,
we typically indicate that by using a word like “so,” “thus,” or “therefore.”
The inference often relies on one or two facts we already have available in our
proof—it may be something we have assumed, or something that we’ve con-
cluded by an inference already. To be clear, we may label these things, and in
the inference we indicate what other statements we’re using in the inference.
An inference will often also contain an explanation of why our new conclusion
follows from the things that come before it. There are some common patterns
of inference that are used very often in proofs; we’ll go through some below.
Some patterns of inference, like proofs by induction, are more involved (and
will be discussed later).
We’ve already discussed one pattern of inference: unpacking, or applying,
a definition. When we unpack a definition, we just restate something that
involves the definiendum by using the definiens. For instance, suppose that
we have already established in the course of a proof that U = V (a). Then we
may apply the definition of = for sets and infer: “Thus, by definition from (a),
every element of U is an element of V and vice versa.”
Somewhat confusingly, we often do not write the justification of an in-
ference when we actually make it, but before. Suppose we haven’t already
proved that U = V, but we want to. If U = V is the conclusion we aim for,
then we can restate this aim also by applying the definition: to prove U = V
we have to prove that every element of U is an element of V and vice versa. So


our proof will have the form: (a) prove that every element of U is an element
of V; (b) every element of V is an element of U; (c) therefore, from (a) and (b)
by definition of =, U = V. But we would usually not write it this way. Instead
we might write something like,

We want to show U = V. By definition of =, this amounts to


showing that every element of U is an element of V and vice versa.
(a) . . . (a proof that every element of U is an element of V) . . .
(b) . . . (a proof that every element of V is an element of U) . . .

Using a Conjunction
Perhaps the simplest inference pattern is that of drawing as conclusion one of
the conjuncts of a conjunction. In other words: if we have assumed or already
proved that p and q, then we’re entitled to infer that p (and also that q). This is
such a basic inference that it is often not mentioned. For instance, once we’ve
unpacked the definition of U = V we’ve established that every element of U is
an element of V and vice versa. From this we can conclude that every element
of V is an element of U (that’s the “vice versa” part).

Proving a Conjunction
Sometimes what you’ll be asked to prove will have the form of a conjunc-
tion; you will be asked to “prove p and q.” In this case, you simply have
to do two things: prove p, and then prove q. You could divide your proof
into two sections, and for clarity, label them. When you’re making your first
notes, you might write “(1) Prove p” at the top of the page, and “(2) Prove q”
in the middle of the page. (Of course, you might not be explicitly asked to
prove a conjunction but find that your proof requires that you prove a con-
junction. For instance, if you’re asked to prove that U = V you will find that,
after unpacking the definition of =, you have to prove: every element of U is
an element of V and every element of V is an element of U).

Proving a Disjunction
When what you are proving takes the form of a disjunction (i.e., it is a state-
ment of the form “p or q”), it is enough to show that one of the disjuncts is true.
However, it basically never happens that either disjunct just follows from the
assumptions of your theorem. More often, the assumptions of your theorem
are themselves disjunctive, or you’re showing that all things of a certain kind
have one of two properties, but some of the things have the one and others
have the other property. This is where proof by cases is useful (see below).


Conditional Proof
Many theorems you will encounter are in conditional form (i.e., show that if
p holds, then q is also true). These cases are nice and easy to set up—simply
assume the antecedent of the conditional (in this case, p) and prove the con-
clusion q from it. So if your theorem reads, “If p then q,” you start your proof
with “assume p” and at the end you should have proved q.
Conditionals may be stated in different ways. So instead of “If p then q,”
a theorem may state that “p only if q,” “q if p,” or “q, provided p.” These all
mean the same and require assuming p and proving q from that assumption.
Recall that a biconditional (“p if and only if (iff) q”) is really two conditionals
put together: if p then q, and if q then p. All you have to do, then, is two
instances of conditional proof: one for the first conditional and another one
for the second. Sometimes, however, it is possible to prove an “iff” statement
by chaining together a bunch of other “iff” statements so that you start with
“p” and end with “q”—but in that case you have to make sure that each step
really is an “iff.”

Universal Claims
Using a universal claim is simple: if something is true for anything, it’s true
for each particular thing. So if, say, the hypothesis of your proof is X ⊆ Y, that
means (unpacking the definition of ⊆) that, for every x ∈ X, x ∈ Y. Thus, if
you already know that z ∈ X, you can conclude z ∈ Y.
Proving a universal claim may seem a little bit tricky. Usually these state-
ments take the following form: “If x has P, then it has Q” or “All Ps are Qs.”
Of course, it might not fit this form perfectly, and it takes a bit of practice to
figure out what you’re asked to prove exactly. But: we often have to prove
that all objects with some property have a certain other property.
The way to prove a universal claim is to introduce names or variables, for
the things that have the one property and then show that they also have the
other property. We might put this by saying that to prove something for all Ps
you have to prove it for an arbitrary P. And the name introduced is a name
for an arbitrary P. We typically use single letters as these names for arbitrary
things, and the letters usually follow conventions: e.g., we use n for natural
numbers, ϕ for formulas, X for sets, f for functions, etc.
The trick is to maintain generality throughout the proof. You start by as-
suming that an arbitrary object (“x”) has the property P, and show (based only
on definitions or what you are allowed to assume) that x has the property Q.
Because you have not stipulated what x is specifically, other than that it has the
property P, you can assert that every P has the property Q. In short,
x is a stand-in for all things with property P.

Proposition 50.4. For all sets X and Y, X ⊆ X ∪ Y.


Proof. Let X and Y be arbitrary sets. We want to show that X ⊆ X ∪ Y. By


definition of ⊆, this amounts to: for every x, if x ∈ X then x ∈ X ∪ Y. So let
x ∈ X be an arbitrary element of X. We have to show that x ∈ X ∪ Y. Since
x ∈ X, x ∈ X or x ∈ Y. Thus, x ∈ { x : x ∈ X ∨ x ∈ Y }. But that, by definition
of ∪, means x ∈ X ∪ Y.

Proof by Cases
Suppose you have a disjunction as an assumption or as an already established
conclusion—you have assumed or proved that p or q is true. You want to
prove r. You do this in two steps: first you assume that p is true, and prove r,
then you assume that q is true and prove r again. This works because we
assume or know that one of the two alternatives holds. The two steps establish
that either one is sufficient for the truth of r. (If both are true, we have not one
but two reasons for why r is true. It is not necessary to separately prove that
r is true assuming both p and q.) To indicate what we’re doing, we announce
that we “distinguish cases.” For instance, suppose we know that x ∈ Y ∪ Z.
Y ∪ Z is defined as { x : x ∈ Y or x ∈ Z }. In other words, by definition, x ∈ Y
or x ∈ Z. We would prove that x ∈ X from this by first assuming that x ∈ Y,
and proving x ∈ X from this assumption, and then assume x ∈ Z, and again
prove x ∈ X from this. You would write “We distinguish cases” under the
assumption, then “Case (1): x ∈ Y” underneath, and “Case (2): x ∈ Z” halfway
down the page. Then you’d proceed to fill in the top half and the bottom half
of the page.
Proof by cases is especially useful if what you’re proving is itself disjunc-
tive. Here’s a simple example:

Proposition 50.5. Suppose Y ⊆ U and Z ⊆ V. Then Y ∪ Z ⊆ U ∪ V.

Proof. Assume (a) that Y ⊆ U and (b) Z ⊆ V. By definition, any x ∈ Y is also


∈ U (c) and any x ∈ Z is also ∈ V (d). To show that Y ∪ Z ⊆ U ∪ V, we have
to show that if x ∈ Y ∪ Z then x ∈ U ∪ V (by definition of ⊆). x ∈ Y ∪ Z iff
x ∈ Y or x ∈ Z (by definition of ∪). Similarly, x ∈ U ∪ V iff x ∈ U or x ∈ V.
So, we have to show: for any x, if x ∈ Y or x ∈ Z, then x ∈ U or x ∈ V.

So far we’ve only unpacked definitions! We’ve reformulated our


proposition without ⊆ and ∪ and are left with trying to prove a
universal conditional claim. By what we’ve discussed above, this
is done by assuming that x is something about which we assume
the “if” part is true, and we’ll go on to show that the “then” part is
true as well. In other words, we’ll assume that x ∈ Y or x ∈ Z and
show that x ∈ U or x ∈ V.2
2 This paragraph just explains what we’re doing—it’s not part of the proof, and you don’t

have to go into all this detail when you write down your own proofs.


Suppose that x ∈ Y or x ∈ Z. We have to show that x ∈ U or x ∈ V. We


distinguish cases.
Case 1: x ∈ Y. By (c), x ∈ U. Thus, x ∈ U or x ∈ V. (Here we’ve made the
inference discussed in the preceding subsection!)
Case 2: x ∈ Z. By (d), x ∈ V. Thus, x ∈ U or x ∈ V.

Proving an Existence Claim


When asked to prove an existence claim, the question will usually be of the
form “prove that there is an x such that . . . x . . . ,” i.e., that some object that
has the property described by “. . . x . . . ” exists. In this case you'll have to
identify a suitable object and show that it has the required property. This sounds straightfor-
ward, but a proof of this kind can be tricky. Typically it involves constructing
or defining an object and proving that the object so defined has the required
property. Finding the right object may be hard, proving that it has the re-
quired property may be hard, and sometimes it’s even tricky to show that
you’ve succeeded in defining an object at all!
Generally, you’d write this out by specifying the object, e.g., “let x be . . . ”
(where . . . specifies which object you have in mind), possibly proving that . . .
in fact describes an object that exists, and then go on to show that x has the
property Q. Here’s a simple example.

Proposition 50.6. Suppose that x ∈ Y. Then there is an X such that X ⊆ Y and
X ≠ ∅.

Proof. Assume x ∈ Y. Let X = { x }.

Here we’ve defined the set X by enumerating its elements. Since


we assume that x is an object, and we can always form a set by
enumerating its elements, we don’t have to show that we’ve suc-
ceeded in defining a set X here. However, we still have to show
that X has the properties required by the proposition. The proof
isn’t complete without that!

Since x ∈ X, X ≠ ∅.

This relies on the definition of X as {x} and the obvious facts that
x ∈ {x} and x ∉ ∅.

Since x is the only element of { x }, and x ∈ Y, every element of X is also


an element of Y. By definition of ⊆, X ⊆ Y.

Using Existence Claims


Suppose you know that some existence claim is true (you’ve proved it, or it’s
a hypothesis you can use), say, “for some x, x ∈ X” or “there is an x ∈ X.” If


you want to use it in your proof, you can just pretend that you have a name
for one of the things which your hypothesis says exist. Since X contains at
least one thing, there are things to which that name might refer. You might of
course not be able to pick one out or describe it further (other than that it is
∈ X). But for the purpose of the proof, you can pretend that you have picked
it out and give a name to it. It’s important to pick a name that you haven’t
already used (or that appears in your hypotheses), otherwise things can go
wrong. In your proof, you indicate this by going from “for some x, x ∈ X” to
“Let a ∈ X.” Now you can reason about a, use some other hypotheses, etc.,
until you come to a conclusion, p. If p no longer mentions a, p is independent
of the assumption that a ∈ X, and you've shown that it follows just from the
assumption “for some x, x ∈ X.”

Proposition 50.7. If X ≠ ∅, then X ∪ Y ≠ ∅.

Proof. Suppose X ≠ ∅. So for some x, x ∈ X.

Here we first just restated the hypothesis of the proposition. This
hypothesis, i.e., X ≠ ∅, hides an existential claim, which you get
to only by unpacking a few definitions. The definition of = tells us
that X = ∅ iff every x ∈ X is also ∈ ∅ and every x ∈ ∅ is also ∈ X.
Negating both sides, we get: X ≠ ∅ iff either some x ∈ X is ∉ ∅
or some x ∈ ∅ is ∉ X. Since nothing is ∈ ∅, the second disjunct
can never be true, and “x ∈ X and x ∉ ∅” reduces to just x ∈ X.
So X ≠ ∅ iff for some x, x ∈ X. That's an existence claim. Now
we use that existence claim by introducing a name for one of the
elements of X:

Let a ∈ X.

Now we’ve introduced a name for one of the things ∈ X. We’ll


continue to argue about a, but we’ll be careful to only assume that
a ∈ X and nothing else:

Since a ∈ X, a ∈ X ∪ Y, by definition of ∪. So for some x, x ∈ X ∪ Y, i.e.,
X ∪ Y ≠ ∅.

In that last step, we went from “a ∈ X ∪ Y” to “for some x, x ∈
X ∪ Y.” That doesn't mention a anymore, so we know that “for
some x, x ∈ X ∪ Y” follows from “for some x, x ∈ X” alone. But
that means that X ∪ Y ≠ ∅.

It’s maybe good practice to keep bound variables like “x” separate from
hypothtical names like a, like we did. In practice, however, we often don’t
and just use x, like so:


Suppose X ≠ ∅, i.e., there is an x ∈ X. By definition of ∪, x ∈
X ∪ Y. So X ∪ Y ≠ ∅.

However, when you do this, you have to be extra careful that you use different
x's and y's for different existential claims. For instance, the following is not a
correct proof of “If X ≠ ∅ and Y ≠ ∅ then X ∩ Y ≠ ∅” (which is not true).

Suppose X ≠ ∅ and Y ≠ ∅. So for some x, x ∈ X and also for
some x, x ∈ Y. Since x ∈ X and x ∈ Y, x ∈ X ∩ Y, by definition
of ∩. So X ∩ Y ≠ ∅.

Can you spot where the incorrect step occurs and explain why the result does
not hold?

50.5 An Example
Our first example is the following simple fact about unions and intersections
of sets. It will illustrate unpacking definitions, proofs of conjunctions, of uni-
versal claims, and proof by cases.

Proposition 50.8. For any sets X, Y, and Z, X ∪ (Y ∩ Z) = (X ∪ Y) ∩ (X ∪ Z).

Let’s prove it!

Proof. We want to show that for any sets X, Y, and Z, X ∪ (Y ∩ Z) = (X ∪ Y) ∩
(X ∪ Z).

First we unpack the definition of “=” in the statement of the propo-


sition. Recall that proving sets identical means showing that the
sets have the same elements. That is, all elements of X ∪ (Y ∩ Z )
are also elements of ( X ∪ Y ) ∩ ( X ∪ Z ), and vice versa. The “vice
versa” means that also every element of ( X ∪ Y ) ∩ ( X ∪ Z ) must
be an element of X ∪ (Y ∩ Z ). So in unpacking the definition, we
see that we have to prove a conjunction. Let’s record this:

By definition, X ∪ (Y ∩ Z) = (X ∪ Y) ∩ (X ∪ Z) iff every element of X ∪ (Y ∩ Z)
is also an element of (X ∪ Y) ∩ (X ∪ Z), and every element of (X ∪ Y) ∩ (X ∪ Z)
is an element of X ∪ (Y ∩ Z).

Since this is a conjunction, we must prove each conjunct separately.
Let's start with the first: let's prove that every element of X ∪ (Y ∩
Z) is also an element of (X ∪ Y) ∩ (X ∪ Z).
This is a universal claim, and so we consider an arbitrary element
of X ∪ (Y ∩ Z ) and show that it must also be an element of ( X ∪
Y ) ∩ ( X ∪ Z ). We’ll pick a variable to call this arbitrary element by,
say, z. Our proof continues:


First, we prove that every element of X ∪ (Y ∩ Z) is also an element of (X ∪
Y) ∩ (X ∪ Z). Let z ∈ X ∪ (Y ∩ Z). We have to show that z ∈ (X ∪ Y) ∩ (X ∪ Z).

Now it is time to unpack the definition of ∪ and ∩. For instance,


the definition of ∪ is: X ∪ Y = {z : z ∈ X or z ∈ Y }. When we
apply the definition to “X ∪ (Y ∩ Z ),” the role of the “Y” in the
definition is now played by “Y ∩ Z,” so X ∪ (Y ∩ Z ) = {z : z ∈
X or z ∈ Y ∩ Z }. So our assumption that z ∈ X ∪ (Y ∩ Z ) amounts
to: z ∈ {z : z ∈ X or z ∈ Y ∩ Z }. And z ∈ {z : . . . z . . .} iff . . . z . . . ,
i.e., in this case, z ∈ X or z ∈ Y ∩ Z.

By the definition of ∪, either z ∈ X or z ∈ Y ∩ Z.

Since this is a disjunction, it will be useful to apply proof by cases.


We take the two cases, and show that in each one, the conclusion
we’re aiming for (namely, “z ∈ ( X ∪ Y ) ∩ ( X ∪ Z )”) obtains.

Case 1: Suppose that z ∈ X.

There’s not much more to work from based on our assumptions.


So let’s look at what we have to work with in the conclusion. We
want to show that z ∈ ( X ∪ Y ) ∩ ( X ∪ Z ). Based on the definition
of ∩, if we want to show that z ∈ ( X ∪ Y ) ∩ ( X ∪ Z ), we have to
show that it’s in both ( X ∪ Y ) and ( X ∪ Z ). But z ∈ X ∪ Y iff z ∈ X
or z ∈ Y, and we already have (as the assumption of case 1) that
z ∈ X. By the same reasoning—switching Z for Y—z ∈ X ∪ Z.
This argument went in the reverse direction, so let’s record our
reasoning in the direction needed in our proof.

Since z ∈ X, z ∈ X or z ∈ Y, and hence, by definition of ∪, z ∈ X ∪ Y.


Similarly, z ∈ X ∪ Z. But this means that z ∈ ( X ∪ Y ) ∩ ( X ∪ Z ), by definition
of ∩.

This completes the first case of the proof by cases. Now we want
to derive the conclusion in the second case, where z ∈ Y ∩ Z.

Case 2: Suppose that z ∈ Y ∩ Z.

Again, we are working with the intersection of two sets. Let’s ap-
ply the definition of ∩:

Since z ∈ Y ∩ Z, z must be an element of both Y and Z, by definition of ∩.

It’s time to look at our conclusion again. We have to show that z is


in both ( X ∪ Y ) and ( X ∪ Z ). And again, the solution is immediate.

Since z ∈ Y, z ∈ ( X ∪ Y ). Since z ∈ Z, also z ∈ ( X ∪ Z ). So, z ∈ ( X ∪ Y ) ∩


( X ∪ Z ).


Here we applied the definitions of ∪ and ∩ again, but since we’ve


already recalled those definitions, and already showed that if z is
in one of two sets it is in their union, we don’t have to be as explicit
in what we’ve done.
We’ve completed the second case of the proof by cases, so now we
can assert our first conclusion.

So, if z ∈ X ∪ (Y ∩ Z ) then z ∈ ( X ∪ Y ) ∩ ( X ∪ Z ).

Now we just want to show the other direction, that every element
of ( X ∪ Y ) ∩ ( X ∪ Z ) is an element of X ∪ (Y ∩ Z ). As before, we
prove this universal claim by assuming we have an arbitrary ele-
ment of the first set and show it must be in the second set. Let’s
state what we’re about to do.

Now, assume that z ∈ ( X ∪ Y ) ∩ ( X ∪ Z ). We want to show that z ∈ X ∪ (Y ∩


Z ).

We are now working from the hypothesis that z ∈ ( X ∪ Y ) ∩ ( X ∪


Z ). It hopefully isn’t too confusing that we’re using the same z here
as in the first part of the proof. When we finished that part, all the
assumptions we’ve made there are no longer in effect, so now we
can make new assumptions about what z is. If that is confusing to
you, just replace z with a different variable in what follows.
We know that z is in both X ∪ Y and X ∪ Z, by definition of ∩. And
by the definition of ∪, we can further unpack this to: either z ∈ X
or z ∈ Y, and also either z ∈ X or z ∈ Z. This looks like a proof
by cases again—except the “and” makes it confusing. You might
think that this amounts to there being three possibilities: z is either
in X, Y or Z. But that would be a mistake. We have to be careful,
so let’s consider each disjunction in turn.

By definition of ∩, z ∈ X ∪ Y and z ∈ X ∪ Z. By definition of ∪, z ∈ X or


z ∈ Y. We distinguish cases.

Since we’re focusing on the first disjunction, we haven’t gotten our


second disjunction (from unpacking X ∪ Z) yet. In fact, we don’t
need it yet. The first case is z ∈ X, and an element of a set is also
an element of the union of that set with any other. So case 1 is easy:

Case 1: Suppose that z ∈ X. It follows that z ∈ X ∪ (Y ∩ Z ).

Now for the second case, z ∈ Y. Here we’ll unpack the second ∪
and do another proof-by-cases:


Case 2: Suppose that z ∈ Y. Since z ∈ X ∪ Z, either z ∈ X or z ∈ Z. We


distinguish cases further:
Case 2a: z ∈ X. Then, again, z ∈ X ∪ (Y ∩ Z ).

Ok, this was a bit weird. We didn’t actually need the assumption
that z ∈ Y for this case, but that’s ok.

Case 2b: z ∈ Z. Then z ∈ Y and z ∈ Z, so z ∈ Y ∩ Z, and consequently,


z ∈ X ∪ (Y ∩ Z ) .

This concludes both proofs-by-cases and so we’re done with the


second half.

So, if z ∈ ( X ∪ Y ) ∩ ( X ∪ Z ) then z ∈ X ∪ (Y ∩ Z ).
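
It can be instructive to replay a proof like this in a proof assistant, where
every unpacking of a definition and every case distinction must be made
explicit. The following is a minimal sketch in Lean 4 (an illustration of
ours, not part of the standard presentation): we model a set X of elements
of a type α as a predicate X : α → Prop and write “X z” for “z ∈ X,” so
that ∪ becomes ∨ and ∩ becomes ∧. All names are ours.

-- Proposition 50.8 in a predicate encoding of sets (a sketch).
example (α : Type) (X Y Z : α → Prop) :
    ∀ z, (X z ∨ (Y z ∧ Z z)) ↔ ((X z ∨ Y z) ∧ (X z ∨ Z z)) := by
  intro z
  constructor
  · intro h
    cases h with
    | inl hx  => exact ⟨Or.inl hx, Or.inl hx⟩       -- Case 1: z ∈ X
    | inr hyz => exact ⟨Or.inr hyz.1, Or.inr hyz.2⟩ -- Case 2: z ∈ Y ∩ Z
  · intro h
    cases h.1 with
    | inl hx => exact Or.inl hx                     -- Case 1: z ∈ X
    | inr hy =>                                     -- Case 2: z ∈ Y
      cases h.2 with
      | inl hx => exact Or.inl hx                   -- Case 2a: z ∈ X
      | inr hz => exact Or.inr ⟨hy, hz⟩             -- Case 2b: z ∈ Z

Note how the two bullets correspond to the two directions of the “iff,” and
how the nested cases mirror the proof by cases above, including case 2a,
where the assumption z ∈ Y goes unused.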

50.6 Another Example


Proposition 50.9. If X ⊆ Z, then X ∪ ( Z \ X ) = Z.

Proof. Suppose that X ⊆ Z. We want to show that X ∪ ( Z \ X ) = Z.

We begin by observing that this is a conditional statement. It is


tacitly universally quantified: the proposition holds for all sets X
and Z. So X and Z are variables for arbitrary sets. To prove such a
statement, we assume the antecedent and prove the consequent.
We continue by using the assumption that X ⊆ Z. Let’s unpack
the definition of ⊆: the assumption means that all elements of X
are also elements of Z. Let’s write this down—it’s an important
fact that we’ll use throughout the proof.

By the definition of ⊆, since X ⊆ Z, for all z, if z ∈ X, then z ∈ Z.

We’ve unpacked all the definitions that are given to us in the as-
sumption. Now we can move onto the conclusion. We want to
show that X ∪ ( Z \ X ) = Z, and so we set up a proof similarly
to the last example: we show that every element of X ∪ ( Z \ X ) is
also an element of Z and, conversely, every element of Z is an ele-
ment of X ∪ ( Z \ X ). We can shorten this to: X ∪ ( Z \ X ) ⊆ Z and
Z ⊆ X ∪ ( Z \ X ). (Here we’re doing the opposite of unpacking a
definition, but it makes the proof a bit easier to read.) Since this is
a conjunction, we have to prove both parts. To show the first part,
i.e., that every element of X ∪ ( Z \ X ) is also an element of Z, we
assume that z ∈ X ∪ ( Z \ X ) for an arbitrary z and show that z ∈ Z.
By the definition of ∪, we can conclude that z ∈ X or z ∈ Z \ X
from z ∈ X ∪ ( Z \ X ). You should now be getting the hang of this.


X ∪ ( Z \ X ) = Z iff X ∪ ( Z \ X ) ⊆ Z and Z ⊆ X ∪ ( Z \ X ). First we prove
that X ∪ ( Z \ X ) ⊆ Z. Let z ∈ X ∪ ( Z \ X ). So, either z ∈ X or z ∈ ( Z \ X ).

We’ve arrived at a disjunction, and from it we want to prove that


z ∈ Z. We do this using proof by cases.

Case 1: z ∈ X. Since for all z, if z ∈ X, z ∈ Z, we have that z ∈ Z.

Here we’ve used the fact recorded earlier which followed from the
hypothesis of the proposition that X ⊆ Z. The first case is com-
plete, and we turn to the second case, z ∈ ( Z \ X ). Recall that
Z \ X denotes the difference of the two sets, i.e., the set of all ele-
ments of Z which are not elements of X. But any element of Z not
in X is in particular an element of Z.

Case 2: z ∈ ( Z \ X ). This means that z ∈ Z and z ∉ X. So, in particular,
z ∈ Z.

Great, we’ve proved the first direction. Now for the second direc-
tion. Here we prove that Z ⊆ X ∪ ( Z \ X ). So we assume that
z ∈ Z and prove that z ∈ X ∪ ( Z \ X ).

Now let z ∈ Z. We want to show that z ∈ X or z ∈ Z \ X.

Since all elements of X are also elements of Z, and Z \ X is the set of


all things that are elements of Z but not X, it follows that z is either
in X or in Z \ X. This may be a bit unclear if you don’t already
know why the result is true. It would be better to prove it step-by-
step. It will help to use a simple fact which we can state without
proof: z ∈ X or z ∈ / X. This is called the “principle of excluded
middle:” for any statement p, either p is true or its negation is true.
(Here, p is the statement that z ∈ X.) Since this is a disjunction, we
can again use proof-by-cases.

Either z ∈ X or z ∉ X. In the former case, z ∈ X ∪ ( Z \ X ). In the latter case,
z ∈ Z and z ∉ X, so z ∈ Z \ X. But then z ∈ X ∪ ( Z \ X ).

Our proof is complete: we have shown that X ∪ ( Z \ X ) = Z.
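
The same proposition can be replayed in the predicate encoding of the
previous sketch (again an illustration of ours, with Z \ X rendered as
“Z z ∧ ¬ X z” and the hypothesis X ⊆ Z as hXZ). The appeal to the
principle of excluded middle in the proof becomes an explicit use of Lean’s
Classical.em.

-- Proposition 50.9 as a sketch in the predicate encoding.
example (α : Type) (X Z : α → Prop) (hXZ : ∀ z, X z → Z z) :
    ∀ z, (X z ∨ (Z z ∧ ¬ X z)) ↔ Z z := by
  intro z
  constructor
  · intro h
    cases h with
    | inl hx => exact hXZ z hx     -- Case 1: z ∈ X, and X ⊆ Z
    | inr h' => exact h'.1         -- Case 2: z ∈ Z \ X, so z ∈ Z
  · intro hz
    cases Classical.em (X z) with  -- either z ∈ X or z ∉ X
    | inl hx  => exact Or.inl hx
    | inr hnx => exact Or.inr ⟨hz, hnx⟩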

50.7 Proof by Contradiction


In the first instance, proof by contradiction is an inference pattern that is used
to prove negative claims. Suppose you want to show that some claim p is false,
i.e., you want to show ¬ p. The most promising strategy is to (a) suppose that
p is true, and (b) show that this assumption leads to something you know to
be false. “Something known to be false” may be a result that conflicts with—
contradicts—p itself, or some other hypothesis of the overall claim you are
considering. For instance, a proof of “if q then ¬ p” involves assuming that


q is true and proving ¬ p from it. If you prove ¬ p by contradiction, that means
assuming p in addition to q. If you can prove ¬q from p, you have shown that
the assumption p leads to something that contradicts your other assumption q,
since q and ¬q cannot both be true. Of course, you have to use other inference
patterns in your proof of the contradiction, as well as unpacking definitions.
Let’s consider an example.
Proposition 50.10. If X ⊆ Y and Y = ∅, then X has no elements.

Proof. Suppose X ⊆ Y and Y = ∅. We want to show that X has no elements.


Since this is a conditional claim, we assume the antecedent and
want to prove the consequent. The consequent is: X has no ele-
ments. We can make that a bit more explicit: it’s not the case that
there is an x ∈ X.
X has no elements iff it’s not the case that there is an x such that x ∈ X.
So we’ve determined that what we want to prove is really a nega-
tive claim ¬ p, namely: it’s not the case that there is an x ∈ X. To
use proof by contradiction, we have to assume the corresponding
positive claim p, i.e., there is an x ∈ X, and prove a contradiction
from it. We indicate that we’re doing a proof by contradiction by
writing “by way of contradiction, assume” or even just “suppose
not,” and then state the assumption p.
Suppose not: there is an x ∈ X.
This is now the new assumption we’ll use to obtain a contradic-
tion. We have two more assumptions: that X ⊆ Y and that Y = ∅.
The first gives us that x ∈ Y:
Since X ⊆ Y, x ∈ Y.
But since Y = ∅, every element of Y (e.g., x) must also be an ele-
ment of ∅.
Since Y = ∅, x ∈ ∅. This is a contradiction, since by definition ∅ has no
elements.
This already completes the proof: we’ve arrived at what we need
(a contradiction) from the assumptions we’ve set up, and this means
that the assumptions can’t all be true. Since the first two assump-
tions (X ⊆ Y and Y = ∅) are not contested, it must be the last
assumption introduced (there is an x ∈ X) that must be false. But
if we want to be thorough, we can spell this out.
Thus, our assumption that there is an x ∈ X must be false, hence, X has no
elements by proof by contradiction.
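
In the predicate encoding used in the earlier sketches, the shape of this
proof by contradiction is especially transparent. We render “X has no
elements” as ¬∃ x, X x and the hypothesis Y = ∅ as “nothing is an element
of Y”; the names hXY and hY are ours. The first intro is exactly the
“suppose not” step: we assume there is an element and derive False.

-- Proposition 50.10 as a sketch: proving a negative claim.
example (α : Type) (X Y : α → Prop) (hXY : ∀ x, X x → Y x)
    (hY : ∀ x, ¬ Y x) : ¬ ∃ x, X x := by
  intro h                                -- suppose not: some x ∈ X
  cases h with
  | intro x hx => exact hY x (hXY x hx)  -- then x ∈ Y; contradiction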


Every positive claim is trivially equivalent to a negative claim: p iff ¬¬ p.


So proofs by contradiction can also be used to establish positive claims “indi-
rectly,” as follows: To prove p, read it as the negative claim ¬¬ p. If we can
prove a contradiction from ¬ p, we’ve established ¬¬ p by proof by contradic-
tion, and hence p.
In the last example, we aimed to prove a negative claim, namely that X
has no elements, and so the assumption we made for the purpose of proof
by contradiction (i.e., that there is an x ∈ X) was a positive claim. It gave
us something to work with, namely the hypothetical x ∈ X about which we
continued to reason until we got to x ∈ ∅.
When proving a positive claim indirectly, the assumption you’d make for
the purpose of proof by contradiction would be negative. But very often you
can easily reformulate a positive claim as a negative claim, and a negative
claim as a positive claim. Our previous proof would have been essentially the
same had we proved “X = ∅” instead of the negative consequent “X has no
elements.” (By definition of =, “X = ∅” is a general claim, since it unpacks to
“every element of X is an element of ∅ and vice versa”.) But it is easily seen
to be equivalent to the negative claim “not: there is an x ∈ X.”
So it is sometimes easier to work with ¬ p as an assumption than it is to
prove p directly. Even when a direct proof is just as simple or even simpler
(as in the next example), some people prefer to proceed indirectly. If the dou-
ble negation confuses you, think of a proof by contradiction of some claim as
a proof of a contradiction from the opposite claim. So, a proof by contradic-
tion of ¬ p is a proof of a contradiction from the assumption p; and proof by
contradiction of p is a proof of a contradiction from ¬ p.
Proposition 50.11. X ⊆ X ∪ Y.

Proof. We want to show that X ⊆ X ∪ Y.


On the face of it, this is a positive claim: every x ∈ X is also in
X ∪ Y. The negation of that is: some x ∈ X is ∉ X ∪ Y. So we can
prove the claim indirectly by assuming this negated claim, and
showing that it leads to a contradiction.
Suppose not, i.e., X ⊈ X ∪ Y.
We have a definition of X ⊆ X ∪ Y: every x ∈ X is also ∈ X ∪ Y.
To understand what X ⊈ X ∪ Y means, we have to use some ele-
mentary logical manipulation on the unpacked definition: it’s false
that every x ∈ X is also ∈ X ∪ Y iff there is some x ∈ X that is
∉ X ∪ Y. (This is a place where you want to be very careful: many stu-
dents’ attempted proofs by contradiction fail because they analyze
the negation of a claim like “all As are Bs” incorrectly.) In other
words, X ⊈ X ∪ Y iff there is an x such that x ∈ X and x ∉ X ∪ Y.
From then on, it’s easy.


So, there is an x ∈ X such that x ∉ X ∪ Y. By definition of ∪, x ∈ X ∪ Y
iff x ∈ X or x ∈ Y. Since x ∈ X, we have x ∈ X ∪ Y. This contradicts the
assumption that x ∉ X ∪ Y.

Proposition 50.12. If X ⊆ Y and Y ⊆ Z then X ⊆ Z.

Proof. Suppose X ⊆ Y and Y ⊆ Z. We want to show X ⊆ Z.

Let’s proceed indirectly: we assume the negation of what we want
to establish.

Suppose not, i.e., X ⊈ Z.

As before, we reason that X ⊈ Z iff not every x ∈ X is also ∈ Z,
i.e., some x ∈ X is ∉ Z. Don’t worry, with practice you won’t have
to think hard anymore to unpack negations like this.

In other words, there is an x such that x ∈ X and x ∉ Z.

Now we can use this to get to our contradiction. Of course, we’ll


have to use the other two assumptions to do it.

Since X ⊆ Y, x ∈ Y. Since Y ⊆ Z, x ∈ Z. But this contradicts x ∉ Z.
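
For comparison, here is Proposition 50.12 in the same predicate encoding
(our illustration). The direct proof is a one-line term, which bears out the
remark above that a direct proof is sometimes simpler than the indirect one.

-- Transitivity of ⊆, proved directly.
example (α : Type) (X Y Z : α → Prop)
    (h1 : ∀ x, X x → Y x) (h2 : ∀ x, Y x → Z x) :
    ∀ x, X x → Z x :=
  fun x hx => h2 x (h1 x hx)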

Proposition 50.13. If X ∪ Y = X ∩ Y then X = Y.

Proof. Suppose X ∪ Y = X ∩ Y. We want to show that X = Y.

The beginning is now routine:

Assume, by way of contradiction, that X ≠ Y.

Our assumption for the proof by contradiction is that X ≠ Y. Since
X = Y iff X ⊆ Y and Y ⊆ X, we get that X ≠ Y iff X ⊈ Y or Y ⊈ X.
(Note how important it is to be careful when manipulating nega-
tions!) To prove a contradiction from this disjunction, we use a
proof by cases and show that in each case, a contradiction follows.

X ≠ Y iff X ⊈ Y or Y ⊈ X. We distinguish cases.

In the first case, we assume X ⊈ Y, i.e., for some x, x ∈ X but x ∉ Y.
X ∩ Y is defined as those elements that X and Y have in common,
so if something isn’t in one of them, it’s not in the intersection.
X ∪ Y is X together with Y, so anything in either is also in the
union. This tells us that x ∈ X ∪ Y but x ∉ X ∩ Y, and hence that
X ∩ Y ≠ X ∪ Y.


Case 1: X ⊈ Y. Then for some x, x ∈ X but x ∉ Y. Since x ∉ Y,
x ∉ X ∩ Y. Since x ∈ X, x ∈ X ∪ Y. So, X ∩ Y ≠ X ∪ Y, contradicting the
assumption that X ∩ Y = X ∪ Y.
Case 2: Y ⊈ X. Then for some y, y ∈ Y but y ∉ X. As before, we have
y ∈ X ∪ Y but y ∉ X ∩ Y, and so X ∩ Y ≠ X ∪ Y, again contradicting X ∩ Y =
X ∪ Y.

50.8 Reading Proofs


Proofs you find in textbooks and articles very seldom give all the details we
have so far included in our examples. Authors often do not draw attention
to when they distinguish cases, when they give an indirect proof, or don’t
mention that they use a definition. So when you read a proof in a textbook,
you will often have to fill in those details for yourself in order to understand
the proof. Doing this is also good practice to get the hang of the various moves
you have to make in a proof. Let’s look at an example.

Proposition 50.14 (Absorption). For all sets X, Y,

X ∩ (X ∪ Y) = X

Proof. If z ∈ X ∩ ( X ∪ Y ), then z ∈ X, so X ∩ ( X ∪ Y ) ⊆ X. Now suppose


z ∈ X. Then also z ∈ X ∪ Y, and therefore also z ∈ X ∩ ( X ∪ Y ).

The preceding proof of the absorption law is very condensed. There is no


mention of any definitions used, no “we have to prove that” before we prove
it, etc. Let’s unpack it. The proposition proved is a general claim about any
sets X and Y, and when the proof mentions X or Y, these are variables for
arbitrary sets. The general claim the proof establishes is what’s required to
prove identity of sets, i.e., that every element of the left side of the identity is
an element of the right and vice versa.

“If z ∈ X ∩ ( X ∪ Y ), then z ∈ X, so X ∩ ( X ∪ Y ) ⊆ X.”

This is the first half of the proof of the identity: it establishes that if an
arbitrary z is an element of the left side, it is also an element of the right, i.e.,
X ∩ ( X ∪ Y ) ⊆ X. Assume that z ∈ X ∩ ( X ∪ Y ). Since z is an element of
the intersection of two sets iff it is an element of both sets, we can conclude
that z ∈ X and also z ∈ X ∪ Y. In particular, z ∈ X, which is what we
wanted to show. Since that’s all that has to be done for the first half, we know
that the rest of the proof must be a proof of the second half, i.e., a proof that
X ⊆ X ∩ ( X ∪ Y ).

“Now suppose z ∈ X. Then also z ∈ X ∪ Y, and therefore also


z ∈ X ∩ ( X ∪ Y ).”


We start by assuming that z ∈ X, since we are showing that, for any z, if


z ∈ X then z ∈ X ∩ ( X ∪ Y ). To show that z ∈ X ∩ ( X ∪ Y ), we have to show
(by definition of “∩”) that (i) z ∈ X and also (ii) z ∈ X ∪ Y. Here (i) is just
our assumption, so there is nothing further to prove, and that’s why the proof
does not mention it again. For (ii), recall that z is an element of a union of sets
iff it is an element of at least one of those sets. Since z ∈ X, and X ∪ Y is the
union of X and Y, this is the case here. So z ∈ X ∪ Y. We’ve shown both (i)
z ∈ X and (ii) z ∈ X ∪ Y, hence, by definition of “∩,” z ∈ X ∩ ( X ∪ Y ). The
proof doesn’t mention those definitions; it’s assumed the reader has already
internalized them. If you haven’t, you’ll have to go back and remind yourself
what they are. Then you’ll also have to recognize why it follows from z ∈ X
that z ∈ X ∪ Y, and from z ∈ X and z ∈ X ∪ Y that z ∈ X ∩ ( X ∪ Y ).
Here’s another version of the proof above, with everything made explicit:

Proof. [By definition of = for sets, to show X ∩ ( X ∪ Y ) = X we have to show (a)
X ∩ ( X ∪ Y ) ⊆ X and (b) X ⊆ X ∩ ( X ∪ Y ). (a): By definition of ⊆, we have
to show that if z ∈ X ∩ ( X ∪ Y ), then z ∈ X.] If z ∈ X ∩ ( X ∪ Y ), then
z ∈ X [since by definition of ∩, z ∈ X ∩ ( X ∪ Y ) iff z ∈ X and z ∈ X ∪ Y],
so X ∩ ( X ∪ Y ) ⊆ X. [(b): By definition of ⊆, we have to show that if z ∈ X,
then z ∈ X ∩ ( X ∪ Y ).] Now suppose [(1)] z ∈ X. Then also [(2)] z ∈ X ∪ Y
[since by (1) z ∈ X or z ∈ Y, which by definition of ∪ means z ∈ X ∪ Y], and
therefore also z ∈ X ∩ ( X ∪ Y ) [since the definition of ∩ requires that z ∈ X,
i.e., (1), and z ∈ X ∪ Y, i.e., (2)].
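
As a final check on our reading, the absorption law can also be replayed in
the predicate encoding of the earlier sketches (our illustration). The
left-to-right direction is literally just the projection to the first conjunct,
which matches how short the condensed proof is.

-- Proposition 50.14 (absorption) as a sketch.
example (α : Type) (X Y : α → Prop) :
    ∀ z, (X z ∧ (X z ∨ Y z)) ↔ X z := by
  intro z
  constructor
  · intro h
    exact h.1                 -- z ∈ X ∩ (X ∪ Y) gives z ∈ X directly
  · intro hx
    exact ⟨hx, Or.inl hx⟩     -- z ∈ X gives z ∈ X ∪ Y, hence both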

50.9 I Can’t Do It!


We all get to a point where we feel like giving up. But you can do it. Your
instructor and teaching assistant, as well as your fellow students, can help.
Ask them for help! Here are a few tips to help you avoid a crisis, and what to
do if you feel like giving up.
To make sure you can solve problems successfully, do the following:

1. Start as far in advance as possible. We get busy throughout the semester


and many of us struggle with procrastination; one of the best things you
can do is to start your homework assignments early. That way, if you’re
stuck, you have time to look for a solution (that isn’t crying).

2. Talk to your classmates. You are not alone. Others in the class may also
struggle—but they may struggle with different things. Talking it out with
your peers can give you a different perspective on the problem that
might lead to a breakthrough. Of course, don’t just copy their solution:
ask them for a hint, or explain where you get stuck and ask them for the
next step. And when you do get it, reciprocate. Helping someone else
along, and explaining things will help you understand better, too.


3. Ask for help. You have many resources available to you—your instructor
and teaching assistant are there for you and want you to succeed. They
should be able to help you work out a problem and identify where in
the process you’re struggling.

4. Take a break. If you’re stuck, it might be because you’ve been staring at the
problem for too long. Take a short break, have a cup of tea, or work on
a different problem for a while, then return to the problem with a fresh
mind. Sleep on it.

Notice how these strategies require that you’ve started to work on the
proof well in advance? If you’ve started the proof at 2am the day before it’s
due, these might not be so helpful.
This might sound like doom and gloom, but solving a proof is a challenge
that pays off in the end. Some people do this as a career—so there must be
something to enjoy about it. Like basically everything, solving problems and
doing proofs is something that requires practice. You might see classmates
who find this easy: they’ve probably just had lots of practice already. Try not
to give in too easily.
If you do run out of time (or patience) on a particular problem: that’s ok. It
doesn’t mean you’re stupid or that you will never get it. Find out (from your
instructor or another student) how it is done, and identify where you went
wrong or got stuck, so you can avoid doing that the next time you encounter
a similar issue. Then try to do it without looking at the solution. And next
time, start (and ask for help) earlier.

50.10 Other Resources


There are many books on how to do proofs in mathematics which may be
useful. Check out How to Read and Do Proofs: An Introduction to Mathematical
Thought Processes by Daniel Solow and How to Prove It: A Structured Approach
by Daniel Velleman in particular. The Book of Proof by Richard Hammack and
Mathematical Reasoning by Ted Sundstrom are books on proof that are freely
available. Philosophers might find More Precisely: The Math You Need to Do
Philosophy by Eric Steinhart to be a good primer on mathematical reasoning.
There are also various shorter guides to proofs available on the internet;
e.g., “Introduction to Mathematical Arguments” by Michael Hutchings and
“How to write proofs” by Eugenia Cheng.

Motivational Videos
Feel like you have no motivation to do your homework? Feeling down? These
videos might help!

• https://www.youtube.com/watch?v=ZXsQAXx_ao0


• https://www.youtube.com/watch?v=BQ4yd2W50No

• https://www.youtube.com/watch?v=StTqXEQ2l-Y

Problems
Problem 50.1. Suppose you are asked to prove that X ∩ Y ≠ ∅. Unpack all
the definitions occurring here, i.e., restate this in a way that does not mention
“∩”, “=”, or “∅”.

Problem 50.2. Prove indirectly that X ∩ Y ⊆ X.

Problem 50.3. Expand the following proof of X ∪ ( X ∩ Y ) = X, where you


mention all the inference patterns used, why each step follows from assump-
tions or claims established before it, and where we have to appeal to which
definitions.

Proof. If z ∈ X ∪ ( X ∩ Y ) then z ∈ X or z ∈ X ∩ Y. If z ∈ X ∩ Y, z ∈ X. Any


z ∈ X is also ∈ X ∪ ( X ∩ Y ).



Chapter 51

Induction

51.1 Introduction

Induction is an important proof technique which is used, in different forms,


in almost all areas of logic, theoretical computer science, and mathematics. It
is needed to prove many of the results in logic.
Induction is often contrasted with deduction, and characterized as the in-
ference from the particular to the general. For instance, if we observe many
green emeralds, and nothing that we would call an emerald that’s not green,
we might conclude that all emeralds are green. This is an inductive inference,
in that it proceeds from many particular cases (this emerald is green, that emer-
ald is green, etc.) to a general claim (all emeralds are green). Mathematical
induction is also an inference that concludes a general claim, but it is of a very
different kind than this “simple induction.”
Very roughly, an inductive proof in mathematics concludes that all math-
ematical objects of a certain sort have a certain property. In the simplest case,
the mathematical objects an inductive proof is concerned with are natural
numbers. In that case an inductive proof is used to establish that all natu-
ral numbers have some property, and it does this by showing that (1) 0 has
the property, and (2) whenever a number n has the property, so does n + 1.
Induction on natural numbers can then also often be used to prove general claims
about mathematical objects that can be assigned numbers. For instance, finite
sets each have a finite number n of elements, and if we can use induction to
show that every number n has the property “all finite sets of size n are . . . ”
then we will have shown something about all finite sets.
Induction can also be generalized to mathematical objects that are induc-
tively defined. For instance, expressions of a formal language such as those of
first-order logic are defined inductively. Structural induction is a way to prove
results about all such expressions. Structural induction, in particular, is very
useful—and widely used—in logic.


51.2 Induction on N
In its simplest form, induction is a technique used to prove results for all nat-
ural numbers. It uses the fact that by starting from 0 and repeatedly adding 1
we eventually reach every natural number. So to prove that something is true
for every number, we can (1) establish that it is true for 0 and (2) show that
whenever it is true for a number n, it is also true for the next number n + 1. If
we abbreviate “number n has property P” by P(n), then a proof by induction
that P(n) for all n ∈ N consists of:

1. a proof of P(0), and

2. a proof that, for any n, if P(n) then P(n + 1).

To make this crystal clear, suppose we have both (1) and (2). Then (1) tells us
that P(0) is true. If we also have (2), we know in particular that if P(0) then
P(0 + 1), i.e., P(1). (This follows from the general statement “for any n, if P(n)
then P(n + 1)” by putting 0 for n.) So by modus ponens, we have that P(1).
From (2) again, now taking 1 for n, we have: if P(1) then P(2). Since we’ve
just established P(1), by modus ponens, we have P(2). And so on. For any
number k, after repeating this k times, we eventually arrive at P(k). So (1) and (2)
together establish P(k) for any k ∈ N.
Let’s look at an example. Suppose we want to find out how many different
sums we can throw with n dice. Although it might seem silly, let’s start with
0 dice. If you have no dice there’s only one possible sum you can “throw”:
no dots at all, which sums to 0. So the number of different possible throws
is 1. If you have only one die, i.e., n = 1, there are six possible values, 1
through 6. With two dice, we can throw any sum from 2 through 12, that’s
11 possibilities. With three dice, we can throw any number from 3 to 18, i.e.,
16 different possibilities. 1, 6, 11, 16: looks like a pattern: maybe the answer
is 5n + 1? Of course, 5n + 1 is the maximum possible, because there are only
5n + 1 numbers between n, the lowest value you can throw with n dice (all
1’s) and 6n, the highest you can throw (all 6’s).

Theorem 51.1. With n dice one can throw all 5n + 1 possible values between n and
6n.

Proof. Let P(n) be the claim: “It is possible to throw any number between n
and 6n using n dice.” To use induction, we prove:

1. The induction basis P(1), i.e., with just one die, you can throw any num-
ber between 1 and 6.

2. The induction step, for all k, if P(k) then P(k + 1).


(1) is proved by inspecting a 6-sided die. It has 6 sides, and every num-
ber between 1 and 6 shows up on one of the sides. So it is possible to throw
any number between 1 and 6 using a single die.
To prove (2), we assume the antecedent of the conditional, i.e., P(k). This
assumption is called the inductive hypothesis. We use it to prove P(k + 1). The
hard part is to find a way of thinking about the possible values of a throw of
k + 1 dice in terms of the possible values of throws of k dice plus of throws of
the extra k + 1-st die—this is what we have to do, though, if we want to use
the inductive hypothesis.
The inductive hypothesis says we can get any number between k and 6k
using k dice. If we throw a 1 with our (k + 1)-st die, this adds 1 to the total.
So we can throw any value between k + 1 and 6k + 1 by throwing k dice and
then rolling a 1 with the (k + 1)-st die. What’s left? The values 6k + 2 through
6k + 6. We can get these by rolling k 6s and then a number between 2 and 6
with our (k + 1)-st die. Together, this means that with k + 1 dice we can throw
any of the numbers between k + 1 and 6(k + 1), i.e., we’ve proved P(k + 1)
using the assumption P(k), the inductive hypothesis.

Very often we use induction when we want to prove something about a


series of objects (numbers, sets, etc.) that is itself defined “inductively,” i.e.,
by defining the (n + 1)-st object in terms of the n-th. For instance, we can
define the sum sn of the natural numbers up to n by

s0 = 0
sn+1 = sn + (n + 1)

This definition gives:

s0 = 0,
s1 = s0 + 1 = 1,
s2 = s1 + 2 = 1 + 2 = 3,
s3 = s2 + 3 = 1 + 2 + 3 = 6, etc.

Now we can prove, by induction, that sn = n(n + 1)/2.

Proposition 51.2. sn = n(n + 1)/2.

Proof. We have to prove (1) that s0 = 0 · (0 + 1)/2 and (2) if sn = n(n + 1)/2
then sn+1 = (n + 1)(n + 2)/2. (1) is obvious. To prove (2), we assume the
inductive hypothesis: sn = n(n + 1)/2. Using it, we have to show that sn+1 =
(n + 1)(n + 2)/2.
What is sn+1 ? By the definition, sn+1 = sn + (n + 1). By inductive hypoth-
esis, sn = n(n + 1)/2. We can substitute this into the previous equation, and


then just need a bit of arithmetic of fractions:

sn+1 = n(n + 1)/2 + (n + 1)
     = n(n + 1)/2 + 2(n + 1)/2
     = (n(n + 1) + 2(n + 1))/2
     = (n + 2)(n + 1)/2.
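
Readers who like to see such inductions machine-checked may find the
following Lean 4 sketch useful (our addition; the definition of s and all
names are ours). To stay within the natural numbers we state the claim
without division, as 2 · sn = n(n + 1).

-- The inductively defined sum and Proposition 51.2.
def s : Nat → Nat
  | 0     => 0
  | n + 1 => s n + (n + 1)

theorem two_mul_s (n : Nat) : 2 * s n = n * (n + 1) := by
  induction n with
  | zero => rfl                     -- induction basis: 2 · s 0 = 0
  | succ n ih =>                    -- ih : 2 * s n = n * (n + 1)
    calc 2 * s (n + 1) = 2 * (s n + (n + 1)) := by simp only [s]
      _ = 2 * s n + 2 * (n + 1)     := Nat.mul_add 2 (s n) (n + 1)
      _ = n * (n + 1) + 2 * (n + 1) := by rw [ih]
      _ = (n + 2) * (n + 1)         := (Nat.add_mul n 2 (n + 1)).symm
      _ = (n + 1) * (n + 2)         := Nat.mul_comm (n + 2) (n + 1)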

The important lesson here is that if you’re proving something about some
inductively defined sequence an , induction is the obvious way to go. And
even if it isn’t (as in the case of the possibilities of dice throws), you can use
induction if you can somehow relate the case for n + 1 to the case for n.

51.3 Strong Induction


In the principle of induction discussed above, we prove P(0) and also if P(n),
then P(n + 1). In the second part, we assume that P(n) is true and use this
assumption to prove P(n + 1). Equivalently, of course, we could assume
P(n − 1) and use it to prove P(n)—the important part is that we be able to
carry out the inference from any number to its successor; that we can prove
the claim in question for any number under the assumption it holds for its
predecessor.
There is a variant of the principle of induction in which we don’t just as-
sume that the claim holds for the predecessor n − 1 of n, but for all numbers
smaller than n, and use this assumption to establish the claim for n. This also
gives us the claim P(k) for all k ∈ N. For once we have established P(0), we
have thereby established that P holds for all numbers less than 1. And if we
know that if P(l ) for all l < n then P(n), we know this in particular for n = 1.
So we can conclude P(1). With this we have proved P(0) and P(1), i.e., P(l )
for all l < 2, and since we also have the conditional, if P(l ) for all l < 2, then
P(2), we can conclude P(2), and so on.
In fact, if we can establish the general conditional “for all n, if P(l ) for all
l < n, then P(n),” we do not have to establish P(0) anymore, since it follows
from it. For remember that a general claim like “for all l < n, P(l )” is true if
there are no l < n. This is a case of vacuous quantification: “all As are Bs” is
true if there are no As, ∀ x ( ϕ( x ) → ψ( x )) is true if no x satisfies ϕ( x ). In this
case, the formalized version would be “∀l (l < n → P(l ))”—and that is true if
there are no l < n. And if n = 0 that’s exactly the case: no l < 0, hence “for all
l < 0, P(l )” is true, whatever P is. A proof of “if P(l ) for all l < n, then P(n)”
thus automatically establishes P(0).


This variant is useful if establishing the claim for n can’t be made to just
rely on the claim for n − 1 but may require the assumption that it is true for
one or more l < n.

51.4 Inductive Definitions


In logic we very often define kinds of objects inductively, i.e., by specifying
rules for what counts as an object of the kind to be defined which explain how
to get new objects of that kind from old objects of that kind. For instance,
we often define special kinds of sequences of symbols, such as the terms and
formulas of a language, by induction. For a simple example, consider strings
consisting of letters a, b, c, d, the symbol ◦, and brackets [ and ], such
as “[[c ◦ d][”, “[a[]◦]”, “a” or “[[a ◦ b] ◦ d]”. You probably feel that there’s
something “wrong” with the first two strings: the brackets don’t “balance” at
all in the first, and you might feel that the “◦” should “connect” expressions
that themselves make sense. The third and fourth string look better: for every
“[” there’s a closing “]” (if there are any at all), and for any ◦ we can find “nice”
expressions on either side, surrounded by a pair of brackets.
We would like to precisely specify what counts as a “nice term.” First of
all, every letter by itself is nice. Anything that’s not just a letter by itself should
be of the form “[t ◦ s]” where s and t are themselves nice. Conversely, if t and
s are nice, then we can form a new nice term by putting a ◦ between them and
surround them by a pair of brackets. We might use these operations to define
the set of nice terms. This is an inductive definition.

Definition 51.3 (Nice terms). The set of nice terms is inductively defined as
follows:

1. Any letter a, b, c, d is a nice term.

2. If s and s′ are nice terms, then so is [s ◦ s′].

3. Nothing else is a nice term.
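
Inductive definitions like this one are exactly what the inductive types of a
proof assistant such as Lean 4 provide. Here is a minimal sketch (all names
are ours): the letters form one type, nice terms another, and clause (3)
comes for free, since an inductive type contains nothing beyond what its
constructors generate.

inductive Letter where
  | a | b | c | d

inductive NiceTerm where
  | letter : Letter → NiceTerm               -- clause (1)
  | comb   : NiceTerm → NiceTerm → NiceTerm  -- clause (2): [s ◦ s′]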

This definition tells us that something counts as a nice term iff it can be
constructed according to the two conditions (1) and (2) in some finite number
of steps. In the first step, we construct all nice terms just consisting of letters
by themselves, i.e.,
a, b, c, d
In the second step, we apply (2) to the terms we’ve constructed. We’ll get

[a ◦ a], [a ◦ b], [b ◦ a], . . . , [d ◦ d]

for all combinations of two letters. In the third step, we apply (2) again, to any
two nice terms we’ve constructed so far. We get new nice terms such as
[a ◦ [a ◦ a]]—where s is a from step 1 and s′ is [a ◦ a] from step 2—and
[[b ◦ c] ◦ [d ◦ b]], constructed out of the two terms [b ◦ c] and [d ◦ b] from
step 2. And so on.
Clause (3) rules out that anything not constructed in this way sneaks into the
set of nice terms.
Note that we have not yet proved that every sequence of symbols that
“feels” nice is nice according to this definition. However, it should be clear
that everything we can construct does in fact “feel nice:” brackets are bal-
anced, and ◦ connects parts that are themselves nice.
The key feature of inductive definitions is that if you want to prove some-
thing about all nice terms, the definition tells you which cases you must con-
sider. For instance, if you are told that t is a nice term, the inductive definition
tells you what t can look like: t can be a letter, or it can be [r ◦ s] for some other
pair of nice terms r and s. Because of clause (3), those are the only possibilities.
When proving claims about all of an inductively defined set, the strong
form of induction becomes particularly important. For instance, suppose we
want to prove that for every nice term of length n, the number of [ in it is <
n/2. This can be seen as a claim about all n: for every n, the number of [ in
any nice term of length n is < n/2.

Proposition 51.4. For any n, the number of [ in a nice term of length n is < n/2.

Proof. To prove this result by (strong) induction, we have to show that the
following conditional claim is true:

If for every k < n, any nice term of length k has < k/2 [’s, then
any nice term of length n has < n/2 [’s.

To show this conditional, assume that its antecedent is true, i.e., assume that
for any k < n, nice terms of length k contain < k/2 [’s. We call this
assumption the inductive hypothesis. We want to show the same is true for
nice terms of length n.
So suppose t is a nice term of length n. Because nice terms are induc-
tively defined, we have two cases: (1) t is a letter by itself, or (2) t is [s ◦ s′]
for some nice terms s and s′.

1. t is a letter. Then n = 1, and the number of [ in t is 0. Since 0 < 1/2, the


claim holds.

2. t is [s ◦ s′] for some nice terms s and s′. Let’s let k be the length of s and
k′ be the length of s′. Then the length n of t is k + k′ + 3 (the lengths of s
and s′ plus three symbols [, ◦, ]). Since k + k′ + 3 is always greater than
k, k < n. Similarly, k′ < n. That means that the induction hypothesis
applies to the terms s and s′: the number m of [ in s is < k/2, and the
number m′ of [ in s′ is < k′/2.


The number of [ in t is the number of [ in s, plus the number of [ in s′,
plus 1, i.e., it is m + m′ + 1. Since m < k/2 and m′ < k′/2 we have:

m + m′ + 1 < k/2 + k′/2 + 1 = (k + k′ + 2)/2 < (k + k′ + 3)/2 = n/2.

In each case, we’ve shown that the number of [ in t is < n/2 (on the basis of
the inductive hypothesis). By strong induction, the proposition follows.

51.5 Structural Induction


So far we have used induction to establish results about all natural numbers.
But a corresponding principle can be used directly to prove results about all
elements of an inductively defined set. This often called structural induction,
because it depends on the structure of the inductively defined objects.
Generally, an inductive definition is given by (a) a list of “initial” elements
of the set and (b) a list of operations which produce new elements of the set
from old ones. In the case of nice terms, for instance, the initial objects are the
letters. We only have one operation, namely

o(s, s′) = [s ◦ s′]

You can even think of the natural numbers N themselves as being given by an
inductive definition: the initial object is 0, and the operation is the successor
function x + 1.
In order to prove something about all elements of an inductively defined
set, i.e., that every element of the set has a property P, we must:

1. Prove that the initial objects have P

2. Prove that for each operation o, if the arguments have P, so does the
result.

For instance, in order to prove something about all nice terms, we would
prove that it is true about all letters, and that it is true about [s ◦ s′] provided
it is true of s and s′ individually.

Proposition 51.5. The number of [ equals the number of ] in any nice term t.

Proof. We use structural induction. Nice terms are inductively defined, with
letters as initial objects and the operation o for constructing new nice terms
out of old ones.

1. The claim is true for every letter, since the number of [ in a letter by itself
is 0 and the number of ] in it is also 0.


2. Suppose the number of [ in s equals the number of ], and the same is true
for s′. The number of [ in o(s, s′), i.e., in [s ◦ s′], is the sum of the number
of [ in s and s′. The number of ] in o(s, s′) is the sum of the number of
] in s and s′. Thus, the number of [ in o(s, s′) equals the number of ] in
o(s, s′).
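
Continuing the NiceTerm sketch from above, we can replay this proposition
mechanically (our illustration; lbr and rbr are our names for the number
of [ and of ] that the printed form of a term would contain). Structural
induction on nice terms is just Lean’s induction tactic applied to a NiceTerm.

def lbr : NiceTerm → Nat                 -- number of [ in a term
  | .letter _  => 0
  | .comb s s' => lbr s + lbr s' + 1

def rbr : NiceTerm → Nat                 -- number of ] in a term
  | .letter _  => 0
  | .comb s s' => rbr s + rbr s' + 1

-- Proposition 51.5 by structural induction.
example (t : NiceTerm) : lbr t = rbr t := by
  induction t with
  | letter l => simp [lbr, rbr]
  | comb s s' ihs ihs' => simp [lbr, rbr, ihs, ihs']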

Let’s give another proof by structural induction: a proper initial segment


of a string of symbols t is any string t′ that agrees with t symbol by symbol,
read from the left, but t′ is shorter. So, e.g., [ a ◦ is a proper initial segment
of [ a ◦ b], but neither are [b ◦ (they disagree at the second symbol) nor [ a ◦ b]
(they are the same length).

Proposition 51.6. Every proper initial segment of a nice term t has more [’s than ]’s.

Proof. By induction on t:

1. t is a letter by itself: Then t has no proper initial segments.

2. t = [s ◦ s′] for some nice terms s and s′. If r is a proper initial segment of
t, there are a number of possibilities:

a) r is just [: Then r has one more [ than it does ].
b) r is [r′ where r′ is a proper initial segment of s: Since s is a nice term,
by induction hypothesis, r′ has more [ than ] and the same is true
for [r′.
c) r is [s or [s ◦ : By the previous result, the number of [ and ] in s is
equal; so the number of [ in [s or [s ◦ is one more than the number
of ].
d) r is [s ◦ r′ where r′ is a proper initial segment of s′: By induction
hypothesis, r′ contains more [ than ]. By the previous result, the
number of [ and of ] in s is equal. So the number of [ in [s ◦ r′ is
greater than the number of ].
e) r is [s ◦ s′: By the previous result, the number of [ and ] in s is equal,
and the same for s′. So there is one more [ in [s ◦ s′ than there are ].

51.6 Relations and Functions


When we have defined a set of objects (such as the natural numbers or the nice
terms) inductively, we can also define relations on these objects by induction.
For instance, consider the following idea: a nice term t is a subterm of a nice
term t′ if it occurs as a part of it. Let’s use a symbol for it: t ⊑ t′. Every nice
term is a subterm of itself, of course: t ⊑ t. We can give an inductive definition
of this relation as follows:

Definition 51.7. The relation of a nice term t being a subterm of t′, t ⊑ t′, is
defined by induction on t′ as follows:

1. If t′ is a letter, then t ⊑ t′ iff t = t′.

2. If t′ is [s ◦ s′], then t ⊑ t′ iff t = t′, t ⊑ s, or t ⊑ s′.

This definition, for instance, will tell us that a ⊑ [b ◦ a]. For (2) says that
a ⊑ [b ◦ a] iff a = [b ◦ a], or a ⊑ b, or a ⊑ a. The first two are false: a
clearly isn’t identical to [b ◦ a], and by (1), a ⊑ b iff a = b, which is also false.
However, also by (1), a ⊑ a iff a = a, which is true.
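
In the NiceTerm sketch, Definition 51.7 becomes a function defined by
recursion on its second argument, exactly as the definition proceeds by
induction on t′ (sub is our name; “sub t t′” plays the role of t ⊑ t′):

-- Definition 51.7: the subterm relation, by recursion on t′.
def sub (t : NiceTerm) : NiceTerm → Prop
  | .letter l  => t = .letter l                        -- clause (1)
  | .comb s s' => t = .comb s s' ∨ sub t s ∨ sub t s'  -- clause (2)
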
It’s important to note that the success of this definition depends on a fact
that we haven’t proved yet: every nice term t is either a letter by itself, or there
are uniquely determined nice terms s and s′ such that t = [s ◦ s′]. “Uniquely
determined” here means that if t = [s ◦ s′] it isn’t also = [r ◦ r′] with s ≠ r or
s′ ≠ r′. If this were the case, then clause (2) may come in conflict with itself:
reading t′ as [s ◦ s′] we might get t ⊑ t′, but if we read t′ as [r ◦ r′] we might
get not t ⊑ t′. Before we prove that this can’t happen, let’s look at an example
where it can happen.

Definition 51.8. Define bracketless terms inductively by

1. Every letter is a bracketless term.

2. If s and s′ are bracketless terms, then s ◦ s′ is a bracketless term.

3. Nothing else is a bracketless term.

Bracketless terms are, e.g., a, b ◦ d, b ◦ a ◦ b. Now if we defined “subterm”


for bracketless terms the way we did above, the second clause would read

If t′ = s ◦ s′, then t ⊑ t′ iff t = t′, t ⊑ s, or t ⊑ s′.

Now b ◦ a ◦ b is of the form s ◦ s′ with s = b and s′ = a ◦ b. It is also of the
form r ◦ r′ with r = b ◦ a and r′ = b. Now is a ◦ b a subterm of b ◦ a ◦ b? The
answer is yes if we go by the first reading, and no if we go by the second.
The property that the way a nice term is built up from other nice terms is
unique is called unique readability. Since inductive definitions of relations for
such inductively defined objects are important, we have to prove that it holds.

Proposition 51.9. Suppose t is a nice term. Then either t is a letter by itself, or there
are uniquely determined nice terms s, s′ such that t = [s ◦ s′].


Proof. If t is a letter by itself, the condition is satisfied. So assume t isn’t a letter
by itself. We can tell from the inductive definition that then t must be of the
form [s ◦ s′] for some nice terms s and s′. It remains to show that these are
uniquely determined, i.e., if t = [r ◦ r′], then s = r and s′ = r′.
So suppose t = [s ◦ s′] and t = [r ◦ r′] for nice terms s, s′, r, r′. We have to
show that s = r and s′ = r′. First, s and r must be identical, for otherwise one
is a proper initial segment of the other. But by Propositions 51.5 and 51.6, that
is impossible if s and r are both nice terms. But if s = r, then clearly also s′ = r′.

We can also define functions inductively: e.g., we can define the function f
that maps any nice term to the maximum depth of nested [. . . ] in it as follows:
Definition 51.10. The depth of a nice term, f (t), is defined inductively as fol-
lows:
f (s) = 0 if s is a letter
f ([s ◦ s′]) = max( f (s), f (s′)) + 1
For instance

f ([a ◦ b]) = max( f (a), f (b)) + 1 = max(0, 0) + 1 = 1, and
f ([[a ◦ b] ◦ c]) = max( f ([a ◦ b]), f (c)) + 1 = max(1, 0) + 1 = 2.
Here, of course, we assume that s and s′ are nice terms, and make use of
the fact that every nice term is either a letter or of the form [s ◦ s′]. It is again
important that it can be of this form in only one way. To see why, consider
again the bracketless terms we defined earlier. The corresponding “defini-
tion” would be:
g(s) = 0 if s is a letter
g(s ◦ s′) = max( g(s), g(s′)) + 1
Now consider the bracketless term a ◦ b ◦ c ◦ d. It can be read in more than
one way, e.g., as s ◦ s′ with s = a and s′ = b ◦ c ◦ d, or as r ◦ r′ with r = a ◦ b
and r′ = c ◦ d. Calculating g according to the first way of reading it would
give

g(s ◦ s′) = max( g(a), g(b ◦ c ◦ d)) + 1 = max(0, 2) + 1 = 3

while according to the other reading we get

g(r ◦ r′) = max( g(a ◦ b), g(c ◦ d)) + 1 = max(1, 1) + 1 = 2


But a function must always yield a unique value; so our “definition” of g


doesn’t define a function at all.
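
In the NiceTerm sketch, by contrast, the analogue of f is unproblematic:
unique readability is automatic for an inductive type, so the recursion
below is well-defined, while the ambiguous g on bracketless terms could not
even be written down in this form.

-- Definition 51.10: the depth of a nice term.
def depth : NiceTerm → Nat
  | .letter _  => 0
  | .comb s s' => max (depth s) (depth s') + 1

#eval depth (.comb (.comb (.letter .a) (.letter .b)) (.letter .c))  -- 2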

Problems
Problem 51.1. Define the set of supernice terms by

1. Any letter a, b, c, d is a supernice term.

2. If s is a supernice term, then so is [s].

3. If t and s are supernice terms, then so is [t ◦ s].

4. Nothing else is a supernice term.

Show that the number of [ in a supernice term s of length n is ≤ n/2 + 1.

Problem 51.2. Prove by structural induction that no nice term starts with ].

Problem 51.3. Give an inductive definition of the function l, where l (t) is the
number of symbols in the nice term t.

Problem 51.4. Prove by induction on nice terms t that f (t) < l (t) (where l (t)
is the number of symbols in t and f (t) is the depth of t as defined in Definition 51.10).



Part XIII

History

Chapter 52

Biographies

52.1 Georg Cantor

An early biography of Georg Cantor (GAY-org KAHN-tor) claimed that he was


born and found on a ship that was sailing for Saint Petersburg, Russia, and
that his parents were unknown. This, however, is not true, although he was
indeed born in Saint Petersburg in 1845.
Cantor received his doctorate in mathematics at the University of Berlin in
1867. He is known for his work in set theory, and is credited with founding
set theory as a distinctive research discipline. He was the first to prove that
there are infinite sets of different sizes. His theories, and especially his theory
of infinities, caused much debate among mathematicians at the time, and his
work was controversial.
Cantor’s religious beliefs and his mathematical work were inextricably
tied; he even claimed that the theory of transfinite numbers had been com-
municated to him directly by God. In later life, Cantor suffered from mental
illness. Beginning in 1884, and more frequently towards his later years, Can-
tor was hospitalized. The heavy criticism of his work, including a falling out
with the mathematician Leopold Kronecker, led to depression and a lack of
interest in mathematics. During depressive episodes, Cantor would turn to
philosophy and literature, and even published a theory that Francis Bacon
was the author of Shakespeare’s plays.
Cantor died on January 6, 1918, in a sanatorium in Halle.

Further Reading For full biographies of Cantor, see ? and ?. Cantor’s rad-
ical views are also described in the BBC Radio 4 program A Brief History of
Mathematics (?). If you’d like to hear about Cantor’s theories in rap form, see
?.


52.2 Alonzo Church


Alonzo Church was born in Washington, DC on June 14, 1903. In early child-
hood, an air gun incident left Church blind in one eye. He finished prepara-
tory school in Connecticut in 1920 and began his university education at Prince-
ton that same year. He completed his doctoral studies in 1927. After a couple
years abroad, Church returned to Princeton. Church was known as exceedingly
polite and careful. His blackboard writing was immaculate, and he would
preserve important papers by carefully covering them in Duco cement. Out-
side of his academic pursuits, he enjoyed reading science fiction magazines
and was not afraid to write to the editors if he spotted any inaccuracies in the
writing.
Church’s academic achievements were great. Together with his students
Stephen Kleene and Barkley Rosser, he developed a theory of effective calcu-
lability, the lambda calculus, independently of Alan Turing’s development of
the Turing machine. The two definitions of computability are equivalent, and
give rise to what is now known as the Church-Turing Thesis, that a function of
the natural numbers is effectively computable if and only if it is computable
via Turing machine (or lambda calculus). He also proved what is now known
as Church’s Theorem: The decision problem for the validity of first-order for-
mulas is unsolvable.
Church continued his work into old age. In 1967 he left Princeton for
UCLA, where he was professor until his retirement in 1990. Church passed
away on August 1, 1995 at the age of 92.

Further Reading For a brief biography of Church, see ?. Church’s origi-


nal writings on the lambda calculus and the Entscheidungsproblem (Church’s
Thesis) are ??. ? records an interview with Church about the Princeton math-
ematics community in the 1930s. Church wrote a series of book reviews of the
Journal of Symbolic Logic from 1936 until 1979. They are all archived on John
MacFarlane’s website (?).

52.3 Gerhard Gentzen


Gerhard Gentzen is known primarily as the creator of structural proof the-
ory, and specifically the creation of the natural deduction and sequent calcu-
lus proof systems. He was born on November 24, 1909 in Greifswald, Ger-
many. Gerhard was homeschooled for three years before attending prepara-
tory school, where he was behind most of his classmates in terms of educa-
tion. Despite this, he was a brilliant student and showed a strong aptitude for
mathematics. His interests were varied, and he, for instance, also wrote poems
for his mother and plays for the school theatre.


Gentzen began his university studies at the University of Greifswald, but


moved around to Göttingen, Munich, and Berlin. He received his doctorate in
1933 from the University of Göttingen under Hermann Weyl. (Paul Bernays
supervised most of his work, but was dismissed from the university by the
Nazis.) In 1934, Gentzen began work as an assistant to David Hilbert. That
same year he developed the sequent calculus and natural deduction proof sys-
tems, in his papers Untersuchungen über das logische Schließen I–II [Investigations
Into Logical Deduction I–II]. He proved the consistency of the Peano axioms in
1936.
Gentzen’s relationship with the Nazis is complicated. At the same time his
mentor Bernays was forced to leave Germany, Gentzen joined the university
branch of the SA, the Nazi paramilitary organization. Like many Germans, he
was a member of the Nazi party. During the war, he served as a telecommuni-
cations officer for the air intelligence unit. However, in 1942 he was released
from duty due to a nervous breakdown. It is unclear whether Gentzen’s
loyalties lay with the Nazi party, or whether he joined the party in order to en-
sure academic success.
In 1943, Gentzen was offered an academic position at the Mathematical
Institute of the German University of Prague, which he accepted. However, in
1945 the citizens of Prague revolted against German occupation. Soviet forces
arrived in the city and arrested all the professors at the university. Because of
his membership in Nazi organizations, Gentzen was taken to a forced labour
camp. He died of malnutrition while in his cell on August 4, 1945 at the age
of 35.

Further Reading For a full biography of Gentzen, see ?. An interesting


read about mathematicians under Nazi rule, which gives a brief note about
Gentzen’s life, is given by ?. Gentzen’s papers on logical deduction are avail-
able in the original German (??). English translations of Gentzen’s papers have
been collected in a single volume by ?, which also includes a biographical
sketch.

52.4 Kurt Gödel


Kurt Gödel (GER-dle) was born on April 28, 1906 in Brünn in the Austro-
Hungarian empire (now Brno in the Czech Republic). Due to his inquisitive
and bright nature, young Kurtele was often called “Der kleine Herr Warum”
(Little Mr. Why) by his family. He excelled in academics from primary school
onward, where he got less than the highest grade only in mathematics. Gödel
was often absent from school due to poor health and was exempt from physi-
cal education. He was diagnosed with rheumatic fever during his childhood.
Throughout his life, he believed this permanently affected his heart despite
medical assessment saying otherwise.


Gödel began studying at the University of Vienna in 1924 and completed


his doctoral studies in 1929. He first intended to study physics, but his inter-
ests soon moved to mathematics and especially logic, in part due to the influ-
ence of the philosopher Rudolf Carnap. His dissertation, written under the
supervision of Hans Hahn, proved the completeness theorem of first-order
predicate logic with identity (?). Only a year later, he obtained his most fa-
mous results—the first and second incompleteness theorems (published in
?). During his time in Vienna, Gödel was heavily involved with the Vienna
Circle, a group of scientifically-minded philosophers that included Carnap,
whose work was especially influenced by Gödel’s results.
In 1938, Gödel married Adele Nimbursky. His parents were not pleased:
not only was she six years older than him and already divorced, but she
worked as a dancer in a nightclub. Social pressures did not affect Gödel, how-
ever, and they remained happily married until his death.
After Nazi Germany annexed Austria in 1938, Gödel and Adele emigrated
to the United States, where he took up a position at the Institute for Advanced
Study in Princeton, New Jersey. Despite his introversion and eccentric nature,
Gödel’s time at Princeton was collaborative and fruitful. He published essays
in set theory, philosophy and physics. Notably, he struck up a particularly
strong friendship with his colleague at the IAS, Albert Einstein.
In his later years, Gödel’s mental health deteriorated. His wife’s hospi-
talization in 1977 meant she was no longer able to cook his meals for him.
Having suffered from mental health issues throughout his life, he succumbed
to paranoia. Deathly afraid of being poisoned, Gödel refused to eat. He died
of starvation on January 14, 1978, in Princeton.

Further Reading For a complete biography of Gödel’s life, see ?.


For further biographical pieces, as well as essays about Gödel’s contributions
to logic and philosophy, see ?, ?, ?, and ?.
Gödel’s PhD thesis is available in the original German (?). The original
text of the incompleteness theorems is (?). All of Gödel’s published and un-
published writings, as well as a selection of correspondence, are available in
English in his Collected Papers ??.
For a detailed treatment of Gödel’s incompleteness theorems, see ?. For
an informal, philosophical discussion of Gödel’s theorems, see Mark Linsen-
mayer’s podcast (?).

52.5 Emmy Noether


Emmy Noether (NER-ter) was born in Erlangen, Germany, on March 23, 1882,
to an upper-middle class scholarly family. Hailed as the “mother of modern
algebra,” Noether made groundbreaking contributions to both mathematics
and physics, despite significant barriers to women’s education. In Germany at


the time, young girls were meant to be educated in arts and were not allowed
to attend college preparatory schools. However, after auditing classes at the
Universities of Göttingen and Erlangen (where her father was professor of
mathematics), Noether was eventually able to enrol as a student at Erlangen
in 1904, when their policy was updated to allow female students. She received
her doctorate in mathematics in 1907.
Despite her qualifications, Noether experienced much resistance during
her career. From 1908–1915, she taught at Erlangen without pay. During this
time, she caught the attention of David Hilbert, one of the world’s foremost
mathematicians of the time, who invited her to Göttingen. However, women
were prohibited from obtaining professorships, and she was only able to lec-
ture under Hilbert’s name, again without pay. During this time she proved
what is now known as Noether’s theorem, which is still used in theoretical
physics today. Noether was finally granted the right to teach in 1919. Hilbert’s
response to continued resistance of his university colleagues reportedly was:
“Gentlemen, the faculty senate is not a bathhouse.”
In the later 1920s, she concentrated on work in abstract algebra, and her
contributions revolutionized the field. In her proofs she often made use of
the so-called ascending chain condition, which states that there is no infinite
strictly increasing chain of certain sets. For instance, certain algebraic struc-
tures now known as Noetherian rings have the property that there are no
infinite sequences of ideals I₁ ⊊ I₂ ⊊ ⋯. The condition can be generalized to
any partial order (in algebra, it concerns the special case of ideals ordered by
the subset relation), and we can also consider the dual descending chain con-
dition, where every strictly decreasing sequence in a partial order eventually
ends. If a partial order satisfies the descending chain condition, it is possible
to use induction along this order in a similar way in which we can use induc-
tion along the < order on N. Such orders are called well-founded or Noetherian,
and the corresponding proof principle Noetherian induction.
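To make the proof principle explicit, here is a gloss added purely for
illustration (ours, not a quotation from Noether): for a partial order < on a
set A that satisfies the descending chain condition, Noetherian induction
licenses the inference

    \[
      \forall x \in A \,\bigl(\forall y \in A\,(y < x \to P(y)) \to P(x)\bigr)
      \quad\Longrightarrow\quad
      \forall x \in A\, P(x).
    \]

That is, to prove that P holds of every element of A, it suffices to show that
P holds of x whenever it holds of all y < x. Strong induction on the natural
numbers is the special case where < is the usual order on N.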
Noether was Jewish, and when the Nazis came to power in 1933, she was
dismissed from her position. Luckily, Noether was able to emigrate to the
United States for a temporary position at Bryn Mawr College in Pennsylvania. During
her time there she also lectured at Princeton, although she found the univer-
sity to be unwelcoming to women (?, 81). In 1935, Noether underwent an
operation to remove a uterine tumour. She died from an infection as a result
of the surgery, and was buried at Bryn Mawr.

Further Reading For a biography of Noether, see ?. The Perimeter Institute
for Theoretical Physics has its lectures on Noether's life and influence avail-
able online (?). If you're tired of reading, Stuff You Missed in History Class has
a podcast on Noether's life and influence (?). The collected works of Noether
are available in the original German (?).

52.6 Rózsa Péter


Rózsa Péter was born Rózsa Politzer in Budapest, Hungary, on February 17,
1905. She is best known for her work on recursive functions, which was es-
sential for the creation of the field of recursion theory.
Péter was raised during harsh political times—WWI raged when she was
a teenager—but was able to attend the affluent Maria Terezia Girls’ School in
Budapest, from which she graduated in 1922. She then studied at Pázmány
Péter University (later renamed Eötvös Loránd University) in Budapest. She
began studying chemistry at the insistence of her father, but later switched
to mathematics, and graduated in 1927. Although she had the credentials to
teach high school mathematics, the economic situation at the time was dire
as the Great Depression affected the world economy. During this time, Péter
took odd jobs as a tutor and private teacher of mathematics. She eventually
returned to university to take up graduate studies in mathematics. She had
originally planned to work in number theory, but after finding out that her re-
sults had already been proven, she almost gave up on mathematics altogether.
She was encouraged to work on Gödel’s incompleteness theorems, and un-
knowingly proved several of his results in different ways. This restored her
confidence, and Péter went on to write her first papers on recursion theory,
inspired by David Hilbert’s foundational program. She received her PhD in
1935, and in 1937 she became an editor for the Journal of Symbolic Logic.
Péter’s early papers are widely credited as founding contributions to the
field of recursive function theory. In ?, she investigated the relationship be-
tween different kinds of recursion. In ?, she showed that a certain recur-
sively defined function is not primitive recursive. This simplified an ear-
lier result due to Wilhelm Ackermann. Péter’s simplified function is what’s
now often called the Ackermann function—and sometimes, more properly,
the Ackermann-Péter function. She wrote the first book on recursive function
theory (?).
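As a concrete illustration, here is a sketch in Python (ours, not code from
Péter's papers) of the Ackermann-Péter function. The nested recursive call in
the last clause is exactly what puts the function beyond primitive recursion:

    # The Ackermann-Peter function: total and computable, but it grows
    # too fast to be primitive recursive.
    def ackermann_peter(m, n):
        if m == 0:
            return n + 1                      # A(0, n) = n + 1
        if n == 0:
            return ackermann_peter(m - 1, 1)  # A(m+1, 0) = A(m, 1)
        # A(m+1, n+1) = A(m, A(m+1, n)); note the nested self-application.
        return ackermann_peter(m - 1, ackermann_peter(m, n - 1))

Even small inputs show the explosive growth: ackermann_peter(3, 3) evaluates
to 61, while ackermann_peter(4, 2) equals 2^65536 − 3, a number with 19,729
decimal digits (far beyond what this naive recursion can compute in practice).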
Despite the importance and influence of her work, Péter did not obtain a
full-time teaching position until 1945. During World War II, Péter was not
allowed to teach in Hungary due to anti-Semitic laws.
In 1944 the government created a Jewish ghetto in Budapest; the ghetto was
cut off from the rest of the city and patrolled by armed guards. Péter was
forced to live in the ghetto until 1945 when it was liberated. She then went on
to teach at the Budapest Teachers Training College, and from 1955 onward at
Eötvös Loránd University. She was the first female Hungarian mathematician
to become an Academic Doctor of Mathematics, and the first woman to be
elected to the Hungarian Academy of Sciences.
Péter was known as a passionate teacher of mathematics, who preferred
to explore the nature and beauty of mathematical problems with her students
rather than to merely lecture. As a result, she was affectionately called “Aunt
Rosa” by her students. Péter died in 1977 at the age of 71.

Further Reading For more biographical reading, see (?) and (?). ? conducted
a brief interview with Péter. For a fun read about mathematics, see Péter’s
book Playing With Infinity (?).

52.7 Julia Robinson


Julia Bowman Robinson was an American mathematician. She is known mainly
for her work on decision problems, and most famously for her contributions to
the solution of Hilbert’s tenth problem. Robinson was born in St. Louis, Mis-
souri, on December 8, 1919. Robinson recalled being intrigued by numbers from
a young age (?, 4). At age nine she contracted scarlet fever and suffered from
several recurrent bouts of rheumatic fever. This forced her to spend much of
her time in bed, putting her behind in her education. Although she was able
to catch up with the help of private tutors, the physical effects of her illness
had a lasting impact on her life.
Despite her childhood struggles, Robinson graduated high school with
several awards in mathematics and the sciences. She started her university
career at San Diego State College, and transferred to the University of Califor-
nia, Berkeley as a senior. There she was highly influenced by mathematician
Raphael Robinson. They quickly became good friends, and married in 1941.
As a spouse of a faculty member, Robinson was barred from teaching in the
mathematics department at Berkeley. Although she continued to audit mathe-
matics classes, she hoped to leave university and start a family. Not long after
her wedding, however, Robinson contracted pneumonia. She was told that
there was a substantial build-up of scar tissue on her heart due to the rheumatic
fever she suffered as a child. Due to the severity of the scar tissue, the doctor
predicted that she would not live past forty and she was advised not to have
children (?, 13).
Robinson was depressed for a long time, but eventually decided to con-
tinue studying mathematics. She returned to Berkeley and completed her PhD
in 1948 under the supervision of Alfred Tarski. The first-order theory of the
real numbers had been shown to be decidable by Tarski, and from Gödel’s
work it followed that the first-order theory of the natural numbers is unde-
cidable. It was a major open problem whether the first-order theory of the
rationals is decidable or not. In her thesis (?), Robinson proved that it was not.
Interested in decision problems, Robinson next attempted to find a solution
to Hilbert's tenth problem. This problem was one of a famous list of 23
mathematical problems posed by David Hilbert in 1900. The tenth problem
asks whether there is an algorithm that will answer, in a finite amount of
time, whether or not a polynomial equation with integer coefficients, such as
3x² − 2y + 3 = 0, has a solution in the integers. Such questions are known as
Diophantine problems. After some initial successes, Robinson joined forces with
Martin Davis and Hilary Putnam, who were also working on the problem.
They succeeded in showing that exponential Diophantine problems (where
the unknowns may also appear as exponents) are undecidable, and showed
that a certain conjecture (later called “J.R.”) implies that Hilbert’s tenth prob-
lem is undecidable (?). Robinson continued to work on the problem for the
next decade. In 1970, the young Russian mathematician Yuri Matijasevich
finally proved the J.R. hypothesis. The combined result is now called the
Matijasevich-Robinson-Davis-Putnam theorem, or MRDP theorem for short.
Matijasevich and Robinson became friends and collaborated on several pa-
pers. In a letter to Matijasevich, Robinson once wrote that “actually I am very
pleased that working together (thousands of miles apart) we are obviously
making more progress than either one of us could alone” (?, 45).
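To see what is at stake computationally, here is a minimal sketch in Python
(our illustration, not a method from the sources cited here) of a semi-decision
procedure for Diophantine problems: it enumerates integer assignments in
ever-larger boxes and halts if it finds a solution, but runs forever when none
exists. By the MRDP theorem, no algorithm can do better in general; none also
halts with 'no' on every unsolvable equation.

    from itertools import count, product

    def search_solution(p, num_vars):
        # Enumerate all integer tuples, in boxes of growing size, and
        # halt as soon as a root of the polynomial p is found. If p has
        # no integer root, the search never terminates.
        for bound in count(0):
            box = range(-bound, bound + 1)
            for xs in product(box, repeat=num_vars):
                if p(*xs) == 0:
                    return xs

    # The example equation from above, 3x^2 - 2y + 3 = 0; the search
    # halts with the solution (-1, 3).
    print(search_solution(lambda x, y: 3 * x**2 - 2 * y + 3, 2))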
Robinson was the first female president of the American Mathematical So-
ciety, and the first woman mathematician to be elected to the National
Academy of Sciences.
She died on July 30, 1985 at the age of 65 after being diagnosed with leukemia.

Further Reading Robinson’s mathematical papers are available in her Col-


lected Works (?), which also includes a reprint of her National Academy of Sci-
ences biographical memoir (?). Robinson’s older sister Constance Reid pub-
lished an “Autobiography of Julia,” based on interviews (?), as well as a full
memoir (?). A short documentary about Robinson and Hilbert’s tenth prob-
lem was directed by George Csicsery (?). For a brief memoir about Yuri Mati-
jasevich’s collaborations with Robinson, and her influence on his work, see
(?).

52.8 Bertrand Russell


Bertrand Russell is hailed as one of the founders of modern analytic philoso-
phy. Born May 18, 1872, Russell was known not only for his work in philos-
ophy and logic, but also for his many popular books in various subject areas. He
was also an ardent political activist throughout his life.
Russell was born in Trellech, Monmouthshire, Wales. His parents were
members of the British nobility. They were free-thinkers, and even made
friends with the radicals in Boston at the time. Unfortunately, Russell’s par-
ents died when he was young, and Russell was sent to live with his grandpar-
ents. There, he was given a religious upbringing (something his parents had
wanted to avoid at all costs). His grandmother was very strict in all matters of
morality. During adolescence he was mostly homeschooled by private tutors.
Russell’s influence in analytic philosophy, and especially logic, is tremen-
dous. He studied mathematics and philosophy at Trinity College, Cambridge,
where he was influenced by the mathematician and philosopher Alfred North
Whitehead. In 1910, Russell and Whitehead published the first volume of
Principia Mathematica, where they championed the view that mathematics is
reducible to logic. He went on to publish hundreds of books, essays, and
political pamphlets. In 1950, he won the Nobel Prize for Literature.
Russell’s was deeply entrenched in politics and social activism. During
World War I he was arrested and sent to prison for six months due to pacifist
activities and protest. While in prison, he was able to write and read, and
claims to have found the experience “quite agreeable.” He remained a pacifist
throughout his life, and was again incarcerated for attending a nuclear disar-
mament rally in 1961. He also survived a plane crash in 1948, where the only
survivors were those sitting in the smoking section. Russell thus claimed
that he owed his life to smoking. He was married four times, but had a
reputation for carrying on extra-marital affairs. He died on February 2, 1970
at the age of 97 in Penrhyndeudraeth, Wales.

Further Reading Russell wrote an autobiography in three parts, spanning
his life from 1872–1967 (???). The Bertrand Russell Research Centre at Mc-
Master University is home of the Bertrand Russell archives. See their website
at ? for information on the volumes of his collected works (including search-
able indexes), and archival projects. Russell’s paper On Denoting (?) is a classic
of 20th century analytic philosophy.
The Stanford Encyclopedia of Philosophy entry on Russell (?) has sound
clips of Russell speaking on desire and political theory. Many video inter-
views with Russell are available online. To see him talk about smoking and
being involved in a plane crash, e.g., see ?. Some of Russell’s works, including
his Introduction to Mathematical Philosophy, are available as free audiobooks on
?.

52.9 Alfred Tarski


Alfred Tarski was born on January 14, 1901 in Warsaw, Poland (then part of
the Russian Empire). Often described as “Napoleonic,” Tarski was boisterous,
talkative, and intense. His energy was often reflected in his lectures—he once
set fire to a wastebasket while disposing of a cigarette during a lecture, and
was forbidden from lecturing in that building again.
Tarski had a thirst for knowledge from a young age. Although later in
life he would tell students that he studied logic because it was the only class
in which he got a B, his high school records show that he got A’s across the
board—even in logic. He studied at the University of Warsaw from 1918 to
1924. Tarski first intended to study biology, but became interested in mathe-
matics, philosophy, and logic, as the university was the center of the Warsaw
School of Logic and Philosophy. Tarski earned his doctorate in 1924 under the
supervision of Stanisław Leśniewski.
Before emigrating to the United States in 1939, Tarski completed some of
his most important work while working as a secondary school teacher in War-
saw. His papers on logical consequence and logical truth were written during
this time. In 1939, Tarski was visiting the United States for a lecture tour. Dur-
ing his visit, Germany invaded Poland, and because of his Jewish heritage,
Tarski could not return. His wife and children remained in Poland until the
end of the war, but were then able to emigrate to the United States as well.
Tarski taught at Harvard, the College of the City of New York, the Institute
for Advanced Study in Princeton, and finally the University of California,
Berkeley. There he founded the multidisciplinary program in Logic and the
Methodology of Science. Tarski died on October 26, 1983 at the age of 82.

Further Reading For more on Tarski’s life, see the biography Alfred Tarski:
Life and Logic (?). Tarski’s seminal works on logical consequence and truth are
available in English in (?). All of Tarski’s original works have been collected
into a four volume series, (?).

52.10 Alan Turing


Alan Turing was born in Maida Vale, London, on June 23, 1912. He is consid-
ered the father of theoretical computer science. Turing’s interest in the phys-
ical sciences and mathematics started at a young age. However, as a boy his
interests were not represented well in his schools, where emphasis was placed
on literature and classics. Consequently, he did poorly in school and was rep-
rimanded by many of his teachers.
Turing attended King’s College, Cambridge as an undergraduate, where
he studied mathematics. In 1936 Turing developed (what is now called) the
Turing machine as an attempt to precisely define the notion of a computable
function and to prove the undecidability of the decision problem. He was
beaten to the result by Alonzo Church, who proved it via his own
lambda calculus. Turing’s paper was still published with reference to Church’s
result. Church invited Turing to Princeton, where he spent 1936–1938, and ob-
tained a doctorate under Church.
Despite his interest in logic, Turing’s earlier interests in physical sciences
remained prevalent. His practical skills were put to work during his ser-
vice with the British cryptanalytic department at Bletchley Park during World
War II. Turing was a central figure in cracking the cypher used in German
naval communications—the Enigma code. Turing's expertise in statistics and
cryptography, together with the introduction of electronic machinery, gave
the team the ability to crack the code by creating a decrypting machine called
a “bombe.” His ideas also helped in the creation of the world’s first pro-
grammable electronic computer, the Colossus, also used at Bletchley Park to
break the German Lorenz cypher.
Turing was gay. Nevertheless, in 1942 he proposed to Joan Clarke, one
of his teammates at Bletchley Park, but later broke off the engagement and
confessed to her that he was homosexual. He had several lovers throughout
his lifetime, although homosexual acts were then criminal offences in the UK.
In 1952, Turing’s house was burgled by a friend of his lover at the time, and
when filing a police report, Turing admitted to having a homosexual relation-
ship, under the impression that the government was on its way to legalizing
homosexual acts. This was not true, and he was charged with gross indecency.
Instead of going to prison, Turing opted for a hormone treatment that reduced
libido. Turing was found dead on June 8, 1954, of a cyanide overdose—most
likely suicide. He was given a royal pardon by Queen Elizabeth II in 2013.

Further Reading For a comprehensive biography of Alan Turing, see ?. Tur-
ing's life and work inspired a play, Breaking the Code, which was produced
for TV in 1996, starring Derek Jacobi as Turing. The Imitation Game, an
Academy Award-nominated film starring Benedict Cumberbatch and Keira
Knightley, is also loosely based on Alan Turing's life and time at Bletchley
Park (?).
? has several podcasts on Turing’s life and work. BBC Horizon’s docu-
mentary The Strange Life and Death of Dr. Turing is available to watch online
(?). (?) is a short video of a working LEGO Turing Machine—made to honour
Turing’s centenary in 2012.
Turing’s original paper on Turing machines and the decision problem is ?.

52.11 Ernst Zermelo


Ernst Zermelo was born on July 27, 1871 in Berlin, Germany. He had five
sisters, though his family suffered from poor health and only three survived
to adulthood. His parents also died early, leaving him and his siblings
orphaned by the time he was seventeen. Zermelo had a deep
interest in the arts, and especially in poetry. He was known for being sharp,
witty, and critical. His most celebrated mathematical achievements include
the introduction of the axiom of choice (in 1904), and his axiomatization of set
theory (in 1908).
Zermelo’s interests at university were varied. He took courses in physics,
mathematics, and philosophy. Under the supervision of Hermann Schwarz,
Zermelo completed his dissertation Investigations in the Calculus of Variations
in 1894 at the University of Berlin. In 1897, he decided to pursue more studies
at the University of Göttingen, where he was heavily influenced by the foun-
dational work of David Hilbert. In 1899 he became eligible for a professorship,
but did not get one until eleven years later—possibly due to his strange de-
meanour and "nervous haste."
Zermelo finally received a paid professorship at the University of Zurich
in 1910, but was forced to retire in 1916 due to tuberculosis. After his recov-
ery, he was given an honorary professorship at the University of Freiburg in
1921. During this time he worked on foundational mathematics. He became
irritated with the works of Thoralf Skolem and Kurt Gödel, and publicly crit-
icized their approaches in his papers. He was dismissed from his position at
Freiburg in 1935, due to his unpopularity and his opposition to Hitler’s rise to
power in Germany.
The later years of Zermelo’s life were marked by isolation. After his dis-
missal in 1935, he abandoned mathematics. He moved to the countryside, where
he lived modestly. He married in 1944, and became completely dependent on
his wife as he was going blind. Zermelo lost his sight completely by 1951. He
passed away in Günterstal, Germany, on May 21, 1953.

Further Reading For a full biography of Zermelo, see ?. Zermelo's semi-
nal 1904 and 1908 papers are available to read in the original German (??).
Zermelo’s collected works, including his writing on physics, are available in
English translation in (??).

Photo Credits
