Intuitionistic Logic
This is a brief introduction to intuitionistic logic produced by Zesen
Qian and revised by RZ. It is not yet well integrated with the rest of the
text and needs examples and motivations.
Introduction
(√2^√2)^√2 = √2^(√2·√2) = √2^2 = 2,
which is rational. So, in this case, let a be √2^√2, and let b be √2.
Does this constitute a valid proof? Most mathematicians feel that it does.
But again, there is something a little bit unsatisfying here: we have proved
the existence of a pair of real numbers with a certain property, without being
able to say which pair of numbers it is. It is possible to prove the same result,
but in such a way that the pair a, b is given in the proof: take a = √3 and
b = log₃ 4. Then

a^b = √3^(log₃ 4) = 3^((1/2)·log₃ 4) = (3^(log₃ 4))^(1/2) = 4^(1/2) = 2,

since 3^(log₃ x) = x.
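The computation above can be checked numerically; this is a quick sketch (the variable names are ours), using floating-point arithmetic, so we only check equality up to rounding error.

```python
import math

# a = sqrt(3) and b = log_3(4) are the pair given constructively above.
a = math.sqrt(3)
b = math.log(4, 3)

# a^b should equal 2, up to floating-point error.
assert abs(a ** b - 2) < 1e-9
```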
Intuitionistic logic is designed to capture a kind of reasoning where moves
like the one in the first proof are disallowed. Proving the existence of an x
satisfying ϕ(x) means that you have to give a specific x, and a proof that it
satisfies ϕ, like in the second proof. Proving that ϕ or ψ holds requires that
you can prove one or the other.
Formally speaking, intuitionistic logic is what you get if you restrict a proof
system for classical logic in a certain way. From the mathematical point of
view, these are just formal deductive systems, but, as already noted, they are
intended to capture a kind of mathematical reasoning. One can take this to
be the kind of reasoning that is justified on a certain philosophical view of
mathematics (such as Brouwer’s intuitionism); one can take it to be a kind
of mathematical reasoning which is more “concrete” and satisfying (along the
lines of Bishop’s constructivism); and one can argue about whether or not
the formal description captures the informal motivation. But whatever philo-
sophical positions we may hold, we can study intuitionistic logic as a formally
presented logic; and for whatever reasons, many mathematical logicians find it
interesting to do so.
1. ⊥ is an atomic formula.
1. ¬ϕ abbreviates ϕ → ⊥.
2. ϕ ↔ ψ abbreviates (ϕ → ψ) ∧ (ψ → ϕ).
Example 1.4. Let us prove ϕ → ¬¬ϕ for any proposition ϕ, which is ϕ → ((ϕ →
⊥) → ⊥). The construction should be a function f that, given a construction M
of ϕ, returns a construction f(M) of (ϕ → ⊥) → ⊥. Here is how f constructs that
construction: we have to define a function g which, when given a construction h
of ϕ → ⊥ as input, outputs a construction of ⊥. We can define g as follows:
apply the input h to the construction M of ϕ (which we received earlier). Since
the output h(M) of h is a construction of ⊥, f(M)(h) = h(M) is a construction
of ⊥ whenever M is a construction of ϕ.
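As a minimal sketch in our own encoding (not from the text), BHK constructions of conditionals can be modeled as Python functions: a construction of ϕ → ψ maps constructions of ϕ to constructions of ψ.

```python
# Sketch: model the construction f of phi -> ((phi -> bottom) -> bottom)
# as a higher-order Python function. "Constructions" here are arbitrary
# Python values; this encoding is ours, for illustration only.

def f(M):
    # Given a construction M of phi, return the construction of
    # (phi -> bottom) -> bottom: the function that applies its input h
    # (a construction of phi -> bottom) to M.
    return lambda h: h(M)

# Pretend "a" is a construction of phi and h consumes it:
assert f("a")(lambda x: x + "!") == "a!"
```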
p1(⟨N1, N2⟩) = N1
p2(⟨N1, N2⟩) = N2
Here is what f does: First it applies p1 to its input M. That yields a construction
of ϕ. Then it applies p2 to M, yielding a construction of ϕ → ⊥. Such a
construction, in turn, is a function p2(M) which, if given as input a construction
of ϕ, yields a construction of ⊥. In other words, if we apply p2(M) to p1(M),
we get a construction of ⊥. Thus, we can define f(M) = p2(M)(p1(M)).
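Continuing the same sketch (our encoding): constructions of conjunctions are pairs, with p1 and p2 as the projections.

```python
# Sketch: constructions of conjunctions as Python tuples. The helper
# names p1, p2, f mirror the text; the encoding itself is ours.

def p1(pair):
    return pair[0]

def p2(pair):
    return pair[1]

def f(M):
    # M is a pair: a construction of phi and a construction of
    # phi -> bottom. Apply the second component to the first.
    return p2(M)(p1(M))

# With a stand-in "construction" of phi and a function consuming it:
M = ("evidence", lambda x: ("absurd", x))
assert f(M) == ("absurd", "evidence")
```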
Conjunction

∧Intro: from ϕ and ψ, infer ϕ ∧ ψ.
∧Elim: from ϕ ∧ ψ, infer ϕ; likewise, from ϕ ∧ ψ, infer ψ.

Conditional

→Intro: from a derivation of ψ from the assumption [ϕ]^u, infer ϕ → ψ, discharging the assumption u.
→Elim: from ϕ → ψ and ϕ, infer ψ.

Disjunction

∨Intro: from ϕ, infer ϕ ∨ ψ; likewise, from ψ, infer ϕ ∨ ψ.
∨Elim: from ϕ ∨ ψ, a derivation of χ from [ϕ]^n, and a derivation of χ from [ψ]^n, infer χ, discharging both assumptions.

Absurdity

⊥I: from ⊥, infer ϕ.
Rules for ¬
Since ¬ϕ is defined as ϕ → ⊥, we strictly speaking do not need rules for ¬. But
if we did, this is what they’d look like:
¬Elim: from ¬ϕ and ϕ, infer ⊥.
¬Intro: from a derivation of ⊥ from the assumption [ϕ]^n, infer ¬ϕ, discharging the assumption.
1. From [ϕ]² and [ϕ → ⊥]¹, infer ⊥ by →Elim.
2. Discharge assumption 1 by →Intro to obtain (ϕ → ⊥) → ⊥.
3. Discharge assumption 2 by →Intro to obtain ϕ → (ϕ → ⊥) → ⊥.
2. ⊢ ((ϕ ∧ ψ) → χ) → (ϕ → (ψ → χ))

1. From [ϕ]² and [ψ]¹, infer ϕ ∧ ψ by ∧Intro.
2. From [(ϕ ∧ ψ) → χ]³ and ϕ ∧ ψ, infer χ by →Elim.
3. Discharge assumption 1 by →Intro: ψ → χ.
4. Discharge assumption 2 by →Intro: ϕ → (ψ → χ).
5. Discharge assumption 3 by →Intro: ((ϕ ∧ ψ) → χ) → (ϕ → (ψ → χ)).
1. From [ϕ ∧ (ϕ → ⊥)]¹, infer ϕ → ⊥ and ϕ by ∧Elim.
2. From ϕ → ⊥ and ϕ, infer ⊥ by →Elim.
3. Discharge assumption 1 by →Intro: (ϕ ∧ (ϕ → ⊥)) → ⊥.
1. From [ϕ]¹, infer ϕ ∨ (ϕ → ⊥) by ∨Intro.
2. From [(ϕ ∨ (ϕ → ⊥)) → ⊥]² and ϕ ∨ (ϕ → ⊥), infer ⊥ by →Elim.
3. Discharge assumption 1 by →Intro: ϕ → ⊥.
4. From ϕ → ⊥, infer ϕ ∨ (ϕ → ⊥) by ∨Intro.
5. From [(ϕ ∨ (ϕ → ⊥)) → ⊥]² and ϕ ∨ (ϕ → ⊥), infer ⊥ by →Elim.
6. Discharge assumption 2 by →Intro: ((ϕ ∨ (ϕ → ⊥)) → ⊥) → ⊥.
Proof. Every natural deduction rule is also a rule in classical natural deduction,
so every derivation in intuitionistic logic is also a derivation in classical logic.
Definition 1.10 (Axioms). The set Ax0 of axioms for intuitionistic
propositional logic consists of all formulas of the following forms:

(ϕ ∧ ψ) → ϕ (1.1)
(ϕ ∧ ψ) → ψ (1.2)
ϕ → (ψ → (ϕ ∧ ψ)) (1.3)
ϕ → (ϕ ∨ ψ) (1.4)
ϕ → (ψ ∨ ϕ) (1.5)
(ϕ → χ) → ((ψ → χ) → ((ϕ ∨ ψ) → χ)) (1.6)
ϕ → (ψ → ϕ) (1.7)
(ϕ → (ψ → χ)) → ((ϕ → ψ) → (ϕ → χ)) (1.8)
⊥ → ϕ (1.9)
Semantics
2.1 Introduction
No logic is satisfactorily described without a semantics, and intuitionistic logic
is no exception. Whereas for classical logic, the semantics based on valuations is
canonical, there are several competing semantics for intuitionistic logic. None of
them are completely satisfactory in the sense that they give an intuitionistically
acceptable account of the meanings of the connectives.
The semantics based on relational models, similar to the semantics for
modal logics, is perhaps the most popular one. In this semantics, proposi-
tional variables are assigned to worlds, and these worlds are related by an
accessibility relation. That relation is always a partial order, i.e., it is reflexive,
antisymmetric, and transitive.
Intuitively, you might think of these worlds as states of knowledge or “evi-
dentiary situations.” A state w′ is accessible from w iff, for all we know, w′ is
a possible (future) state of knowledge, i.e., one that is compatible with what’s
known at w. Once a proposition is known, it can’t become un-known, i.e.,
whenever ϕ is known at w and Rww′, ϕ is known at w′ as well. So “knowl-
edge” is monotonic with respect to the accessibility relation.
If we define “ϕ is known” as in epistemic logic as “true in all epistemic
alternatives,” then ϕ ∧ ψ is known at w if in all epistemic alternatives, both ϕ
and ψ are known. But since knowledge is monotonic and R is reflexive, that
means that ϕ ∧ ψ is known at w iff ϕ and ψ are known at w. For the same
reason, ϕ ∨ ψ is known at w iff at least one of them is known. So for ∧ and ∨,
the truth conditions of the connectives coincide with those in classical logic.
The truth conditions for the conditional, however, differ from classical logic.
ϕ → ψ is known at w iff at no w′ with Rww′, ϕ is known without ψ also being
known. This is not the same as the condition that ϕ is unknown or ψ is known
at w. For if we know neither ϕ nor ψ at w, there might be a future epistemic
state w′ with Rww′ such that at w′, ϕ becomes known without ψ also becoming known.
We know ¬ϕ only if there is no possible future epistemic state in which
we know ϕ. Here the idea is that if ϕ were knowable, then in some possible
future epistemic state ϕ becomes known. Since we can’t know ⊥, in that future
epistemic state, we would know ϕ but not know ⊥.
On this interpretation the principle of excluded middle fails. For there are
some ϕ which we don’t yet know, but which we might come to know. For such
an ϕ, both ϕ and ¬ϕ are unknown, so ϕ ∨ ¬ϕ is not known. But we do know,
e.g., that ¬(ϕ ∧ ¬ϕ). For no future state in which we know both ϕ and ¬ϕ is
possible, and we know this independently of whether or not we know ϕ or ¬ϕ.
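The countermodel behind this failure of excluded middle can be made concrete. Below is a minimal sketch (the encoding of formulas as tuples, the function name `forces`, and the representation of R as a dict of accessible worlds are all ours): two worlds 0 and 1, where p is unknown at 0 but known at 1.

```python
# Sketch of the relational semantics: R[w] lists all worlds accessible
# from w (reflexive and transitive), V maps atoms to the worlds where
# they hold (monotone along R). Formulas are nested tuples.

def forces(w, phi, R, V):
    kind = phi[0]
    if kind == "atom":
        return w in V[phi[1]]
    if kind == "and":
        return forces(w, phi[1], R, V) and forces(w, phi[2], R, V)
    if kind == "or":
        return forces(w, phi[1], R, V) or forces(w, phi[2], R, V)
    if kind == "implies":
        # phi -> psi: at every accessible w', knowing phi forces psi.
        return all(not forces(v, phi[1], R, V) or forces(v, phi[2], R, V)
                   for v in R[w])
    if kind == "bottom":
        return False
    raise ValueError(kind)

R = {0: {0, 1}, 1: {1}}   # a two-world partial order
V = {"p": {1}}            # p becomes known only at world 1
p = ("atom", "p")
not_p = ("implies", p, ("bottom",))

assert not forces(0, ("or", p, not_p), R, V)   # p ∨ ¬p fails at 0
# ¬(p ∧ ¬p) nevertheless holds at 0:
assert forces(0, ("implies", ("and", p, not_p), ("bottom",)), R, V)
```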
Relational models are not the only available semantics for intuitionistic
logic. The topological semantics is another: here propositions are interpreted
as open sets in a topological space, and the connectives are interpreted as
operations on these sets (e.g., ∧ corresponds to intersection).
Proof. Exercise.
W_w = {u ∈ W : Rwu},
R_w = R ∩ (W_w)², and
V_w(p) = V(p) ∩ W_w.
3. Open sets are closed under arbitrary unions: if Uᵢ ∈ O for all i ∈ I, then
⋃{Uᵢ : i ∈ I} ∈ O.
We may write X for a topology if the collection of open sets can be inferred
from the context; note, however, that only after X is endowed with a collection
of open sets can it be called a topology.
1. [[⊥]]_X = ∅
2. [[p]]_X = V(p)
Here, Int(V) is the function that maps a set V ⊆ X to its interior, that is, the
union of all open sets it contains. In other words,

Int(V) = ⋃{U : U ⊆ V and U ∈ O}.
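For a finite topology, the interior operator can be computed directly as this union. A minimal sketch (the function name `interior` and the example topology are ours):

```python
# Sketch: Int(V) over a finite topology, computed as the union of all
# open sets contained in V. "opens" is the collection O.

def interior(V, opens):
    result = set()
    for U in opens:
        if U <= V:          # U is an open subset of V
            result |= U
    return result

# A finite topology on X = {1, 2, 3}:
X = frozenset({1, 2, 3})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]

assert interior({1, 3}, opens) == {1}   # largest open set inside {1, 3}
assert interior(X, opens) == X
```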
The soundness proof relies on the fact that all axioms are intuitionisti-
cally valid; this still needs to be proved, e.g., in the Semantics chapter.
Suppose M, w ⊩ Γ. Since M, w ⊩ Γ and Γ ⊨ ϕi → ϕn, we have M, w ⊩
ϕi → ϕn. By definition, this means that for all w′ such that Rww′, if M, w′ ⊩ ϕi
then M, w′ ⊩ ϕn. Since R is reflexive, w is among the w′ such that Rww′,
i.e., we have that if M, w ⊩ ϕi then M, w ⊩ ϕn. Since Γ ⊨ ϕi, M, w ⊩ ϕi.
So, M, w ⊩ ϕn, as we wanted to show.
Problem 3.1. Complete the proof of Theorem 3.2. For the cases for ¬Intro
and ¬Elim, use the definition of M, w ⊩ ¬ϕ in Definition 2.2, i.e., don’t treat
¬ϕ as defined by ϕ → ⊥.
Problem 3.2. Show that the following formulas are not derivable in intuition-
istic logic:
1. (ϕ → ψ) ∨ (ψ → ϕ)
2. (¬¬ϕ → ϕ) → (ϕ ∨ ¬ϕ)
3. (ϕ → ψ ∨ χ) → (ϕ → ψ) ∨ (ϕ → χ)
2. ∆(σ.n) = (∆(σ) ∪ {ψn})* if ∆(σ) ∪ {ψn} ⊬ χn, and ∆(σ.n) = ∆(σ) otherwise.

Here by (∆(σ) ∪ {ψn})* we mean the prime set of formulas which exists by
Lemma 3.4 applied to the set ∆(σ) ∪ {ψn} and the formula χn. Note that by
this definition, if ∆(σ) ∪ {ψn} ⊬ χn, then ∆(σ.n) ⊢ ψn and ∆(σ.n) ⊬ χn. Note
also that ∆(σ) ⊆ ∆(σ.n) for any n. If ∆ is prime, then ∆(σ) is prime for all σ.
Definition 3.5. Suppose ∆ is prime. Then the canonical model M(∆) for ∆
is defined by:
3. V(p) = {σ : p ∈ ∆(σ)}.
Proof. By induction on ϕ.
3. ϕ ≡ ¬ψ: exercise.
Problem 3.6. Show that if M is a relational model using a linear order then
M ⊩ (ϕ → ψ) ∨ (ψ → ϕ).
3.7 Decidability

Observe that the proof of the completeness theorem gives us, for every Γ ⊬ ϕ, a
model with an infinite number of worlds witnessing the fact that Γ ⊭ ϕ. The
following proposition shows that to prove ϕ it is enough to prove that M ⊩ ϕ
for all finite models (i.e., models with a finite set of worlds).
M, w ⊩ ϕ iff M′, [w] ⊩ ϕ
for all formulas ϕ with only propositional variables from P . This is left as an
exercise for the reader.
Problem 3.7. Finish the proof of Theorem 3.8 by showing that M, w ⊩ ϕ iff
M′, [w] ⊩ ϕ for all formulas ϕ with only propositional variables from P.
Propositions as Types
4.1 Introduction
Historically, the lambda calculus and intuitionistic logic were developed
separately. Haskell Curry and William Howard independently discovered a close
similarity: types in a typed lambda calculus correspond to formulas in intu-
itionistic logic in such a way that a derivation of a formula corresponds directly
to a typed lambda term with that formula as its type. Moreover, beta reduc-
tion in the typed lambda calculus corresponds to certain transformations of
derivations.
For instance, a derivation of ϕ → ψ corresponds to a term λx^ϕ. N^ψ, which has
the function type ϕ → ψ. The inference rules of natural deduction correspond
to typing rules in the typed lambda calculus. For example, the →Intro rule,
which derives ϕ → ψ from a derivation of ψ depending on the assumption [ϕ]^x
(discharging x), corresponds to the typing rule λ:

from x : ϕ ⇒ N : ψ, infer ⇒ λx^ϕ. N^ψ : ϕ → ψ,

where the rule means that if x is of type ϕ and N is of type ψ,
then λx^ϕ. N is of type ϕ → ψ.
The →Elim rule corresponds to the typing rule app for application terms:
just as →Elim derives ψ from ϕ → ψ and ϕ, the rule app infers
⇒ P^{ϕ→ψ} Q^ϕ : ψ from ⇒ P : ϕ → ψ and ⇒ Q : ϕ.
If a →Intro rule is followed immediately by a →Elim rule, the derivation
can be simplified: the derivation that derives ψ from the assumption [ϕ]^x,
discharges x by →Intro to obtain ϕ → ψ, and then applies →Elim to a
derivation of ϕ reduces to the derivation of ψ in which the assumption [ϕ]^x
is replaced by the given derivation of ϕ. This corresponds to the beta
reduction of lambda terms:

(λx^ϕ. P^ψ)Q → P[Q/x].
Similar correspondences hold between the rules for ∧ and “product” types,
and between the rules for ∨ and “sum” types.
This correspondence between terms in the simply typed lambda calculus
and natural deduction derivations is called the “Curry-Howard”, or “proposi-
tions as types” correspondence. In addition to formulas (propositions) corre-
sponding to types, and proofs to terms, we can summarize the correspondences
as follows:
logic                     program
proposition               type
proof                     term
assumption                variable
discharged assumption     bound variable
undischarged assumption   free variable
implication               function type
conjunction               product type
disjunction               sum type
absurdity                 bottom type
∧Elim: from Γ ⇒ ϕ ∧ ψ, infer Γ ⇒ ϕ; likewise, from Γ ⇒ ϕ ∧ ψ, infer Γ ⇒ ψ.
The label ∧Elim hints at the relation with the rule of the same name in natural
deduction.
Likewise, suppose we have Γ, ϕ ⇒ ψ, meaning we have a derivation with
undischarged assumptions Γ, ϕ and end-formula ψ. If we apply the →Intro
rule, we have a derivation with Γ as undischarged assumptions and ϕ → ψ as
the end-formula, i.e., Γ ⇒ ϕ → ψ. Note how this has made the discharge of
assumptions more explicit.
→Intro: from Γ, ϕ ⇒ ψ, infer Γ ⇒ ϕ → ψ.
We can draw conclusions from other rules in the same fashion, which is
spelled out as follows:
∧Intro: from Γ ⇒ ϕ and ∆ ⇒ ψ, infer Γ, ∆ ⇒ ϕ ∧ ψ.
∧Elim1: from Γ ⇒ ϕ ∧ ψ, infer Γ ⇒ ϕ. ∧Elim2: from Γ ⇒ ϕ ∧ ψ, infer Γ ⇒ ψ.
∨Intro1: from Γ ⇒ ϕ, infer Γ ⇒ ϕ ∨ ψ. ∨Intro2: from Γ ⇒ ψ, infer Γ ⇒ ϕ ∨ ψ.
∨Elim: from Γ ⇒ ϕ ∨ ψ, ∆, ϕ ⇒ χ, and ∆′, ψ ⇒ χ, infer Γ, ∆, ∆′ ⇒ χ.
→Intro: from Γ, ϕ ⇒ ψ, infer Γ ⇒ ϕ → ψ.
→Elim: from ∆ ⇒ ϕ → ψ and Γ ⇒ ϕ, infer Γ, ∆ ⇒ ψ.
⊥I: from Γ ⇒ ⊥, infer Γ ⇒ ϕ.
Axiom: ϕ ⇒ ϕ.
Together, these rules can be taken as a calculus about what natural de-
duction derivations exist. They can also be taken as a notational variant of
natural deduction, in which each step records not only the formula derived but
also the undischarged assumptions from which it was derived.
Definition 4.1 (Proof terms). Proof terms are inductively generated by the
following rules:
Since, in general, terms only make sense within a specific context, from now on
we will speak simply of “terms” instead of “typing pairs”; it will be apparent
from context when we mean the literal term M.
1. Assumptions discharged in the same step (that is, with the same number
on the square bracket) must be assigned the same variable.
2. Each assumption ϕ is turned into x : ϕ, where x is the variable assigned to it.
With assumptions all associated with variables (which are terms), we can now
inductively translate the rest of the deduction tree. The modified natural
deduction rules taking into account context and proof terms are given below.
Given the proof terms for the premise(s), we obtain the corresponding proof
term for the conclusion.
∧Intro: from M1 : ϕ1 and M2 : ϕ2, infer ⟨M1, M2⟩ : ϕ1 ∧ ϕ2.
∧Elim1: from M : ϕ1 ∧ ϕ2, infer p1(M) : ϕ1. ∧Elim2: from M : ϕ1 ∧ ϕ2, infer p2(M) : ϕ2.
→Intro: from N : ψ, depending on the assumption [x : ϕ], infer λx^ϕ. N : ϕ → ψ, discharging x.
→Elim: from P : ϕ → ψ and Q : ϕ, infer PQ : ψ.
∨Intro1: from M1 : ϕ1, infer in1^{ϕ2}(M1) : ϕ1 ∨ ϕ2. ∨Intro2: from M2 : ϕ2, infer in2^{ϕ1}(M2) : ϕ1 ∨ ϕ2.
∨Elim: from M : ϕ1 ∨ ϕ2, N1 : χ depending on [x1 : ϕ1], and N2 : χ depending on [x2 : ϕ2], infer case(M, x1.N1, x2.N2) : χ, discharging x1 and x2.
The term case(M, x1.N1, x2.N2) mimics the case clause in programming
languages: we already have a derivation of ϕ ∨ ψ, a derivation of χ assuming ϕ,
and a derivation of χ assuming ψ. The case operator thus selects the appropriate
proof depending on M; either way it’s a proof of χ.
⊥I: from N : ⊥, infer contrϕ(N) : ϕ.
If we leave out the last →Intro, the assumptions denoted by y would be in the
context and we would get:
Another example: ` ϕ → (ϕ → ⊥) → ⊥
1. From [x : ϕ]² and [y : ϕ → ⊥]¹, infer yx : ⊥ by →Elim.
2. Discharge assumption 1 by →Intro: λy^{ϕ→⊥}. yx : (ϕ → ⊥) → ⊥.
3. Discharge assumption 2 by →Intro: λx^ϕ. λy^{ϕ→⊥}. yx : ϕ → (ϕ → ⊥) → ⊥.
Again, all assumptions are discharged and thus the context is empty; the
resulting judgment is

⊢ λx^ϕ. λy^{ϕ→⊥}. yx : ϕ → (ϕ → ⊥) → ⊥
If we leave out the last two →Intro inferences, the assumptions denoted by
both x and y would be in the context, and we would get

x : ϕ, y : ϕ → ⊥ ⊢ yx : ⊥
For each natural deduction rule, the term in the conclusion is always formed
by wrapping some operator around the terms assigned to the premise(s).
Consider again the derivation of ((ϕ ∨ (ϕ → ⊥)) → ⊥) → ⊥, this time
annotated only with terms; the formulas after the colons are left blank:

1. From [x]¹, by ∨Intro1: in1^{ϕ→⊥}(x) : …
2. With [y]², by →Elim: y(in1^{ϕ→⊥}(x)) : …
3. Discharge assumption 1 by →Intro: λx^ϕ. y(in1^{ϕ→⊥}(x)) : …
4. By ∨Intro2: in2^ϕ(λx^ϕ. y(in1^{ϕ→⊥}(x))) : …
5. With [y]², by →Elim: y(in2^ϕ(λx^ϕ. y(in1^{ϕ→⊥}(x)))) : …
6. Discharge assumption 2 by →Intro: λy^{(ϕ∨(ϕ→⊥))→⊥}. y(in2^ϕ(λx^ϕ. y(in1^{ϕ→⊥}(x)))) : …
Our next step is to recover the formulas these terms witness. We define a
function F (Γ, M ) which denotes the formula witnessed by M in context Γ , by
induction on M as follows:
F(Γ, x) = Γ(x)
F(Γ, ⟨N1, N2⟩) = F(Γ, N1) ∧ F(Γ, N2)
F(Γ, pi(N)) = ϕi if F(Γ, N) = ϕ1 ∧ ϕ2
F(Γ, ini^ϕ(N)) = F(Γ, N) ∨ ϕ if i = 1, and ϕ ∨ F(Γ, N) if i = 2
F(Γ, case(M, x1.N1, x2.N2)) = F(Γ ∪ {xi : ϕi}, Ni), where F(Γ, M) = ϕ1 ∨ ϕ2
F(Γ, λx^ϕ. N) = ϕ → F(Γ ∪ {x : ϕ}, N)
F(Γ, N M) = ψ if F(Γ, N) = ϕ → ψ
∧Intro: from Γ ⊢ M1 : ϕ1 and ∆ ⊢ M2 : ϕ2, infer Γ, ∆ ⊢ ⟨M1, M2⟩ : ϕ1 ∧ ϕ2.
∧Elimi: from Γ ⊢ M : ϕ1 ∧ ϕ2, infer Γ ⊢ pi(M) : ϕi.
∨Intro1: from Γ ⊢ M1 : ϕ1, infer Γ ⊢ in1^{ϕ2}(M1) : ϕ1 ∨ ϕ2. ∨Intro2: from Γ ⊢ M2 : ϕ2, infer Γ ⊢ in2^{ϕ1}(M2) : ϕ1 ∨ ϕ2.
∨Elim: from Γ ⊢ M : ϕ ∨ ψ, ∆1, x1 : ϕ ⊢ N1 : χ, and ∆2, x2 : ψ ⊢ N2 : χ, infer Γ, ∆1, ∆2 ⊢ case(M, x1.N1, x2.N2) : χ.
→Intro: from Γ, x : ϕ ⊢ N : ψ, infer Γ ⊢ λx^ϕ. N : ϕ → ψ.
→Elim: from ∆ ⊢ P : ϕ → ψ and Γ ⊢ Q : ϕ, infer Γ, ∆ ⊢ PQ : ψ.
⊥Elim: from Γ ⊢ M : ⊥, infer Γ ⊢ contrϕ(M) : ϕ.
These are the typing rules of the simply typed lambda calculus extended
with product, sum and bottom.
In addition, F(Γ, M) is in fact a type-checking algorithm: it returns the type
of the term with respect to the context, and is undefined if the term is
ill-typed with respect to the context.
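As a sketch of how F doubles as a type checker, here is a partial implementation in our own tuple encoding of terms and formulas (the case and contr clauses are omitted for brevity; raising an exception plays the role of "undefined"):

```python
# Sketch: F(Gamma, M) by recursion on the proof term M.
# Formulas: ("atom", name), ("and", a, b), ("or", a, b),
#           ("implies", a, b), ("bottom",).
# Terms:    ("var", x), ("pair", N1, N2), ("proj", i, N),
#           ("inj", i, other, N), ("lam", x, phi, N), ("app", P, Q).

def F(gamma, M):
    kind = M[0]
    if kind == "var":                      # F(Gamma, x) = Gamma(x)
        return gamma[M[1]]
    if kind == "pair":                     # <N1, N2> : phi1 /\ phi2
        return ("and", F(gamma, M[1]), F(gamma, M[2]))
    if kind == "proj":                     # p_i(N), i in {1, 2}
        i, N = M[1], M[2]
        phi = F(gamma, N)
        if phi[0] != "and":
            raise TypeError("projection of a non-conjunction")
        return phi[i]
    if kind == "inj":                      # in_i^other(N)
        i, other, N = M[1], M[2], M[3]
        psi = F(gamma, N)
        return ("or", psi, other) if i == 1 else ("or", other, psi)
    if kind == "lam":                      # lambda x^phi. N
        x, phi, N = M[1], M[2], M[3]
        return ("implies", phi, F({**gamma, x: phi}, N))
    if kind == "app":                      # P Q
        phi = F(gamma, M[1])
        if phi[0] != "implies" or F(gamma, M[2]) != phi[1]:
            raise TypeError("ill-typed application")
        return phi[2]
    raise TypeError("unknown term")

p = ("atom", "p")
# lambda x^p. x has type p -> p in the empty context:
assert F({}, ("lam", "x", p, ("var", "x"))) == ("implies", p, p)
```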
4.6 Reduction
A derivation which concludes ϕ1 ∧ ϕ2 by ∧Intro from derivations D1 of ϕ1
and D2 of ϕ2, and then immediately applies ∧Elim to conclude ϕi, reduces
to the derivation Di of ϕi alone. In the typed lambda calculus, this is the
beta reduction rule for the product type:

pi(⟨M1^{ϕ1}, M2^{ϕ2}⟩) → Mi^{ϕi}
Note the type annotations on M1 and M2: while in the standard term syntax
only λx^ϕ. N carries such an annotation, we reuse the notation here to remind
us of the formula each term is associated with in the corresponding natural
deduction derivation, and so to reveal the correspondence between the two
kinds of syntax.
In natural deduction, a pair of inferences such as those above, i.e., a pair
that is subject to cancelling, is called a cut. In the typed lambda calculus,
the term on the left of → is called a redex, and the term on the right is
called the reductum. Unlike the untyped lambda calculus, where only (λx. N)Q
is considered to be a redex, in the typed lambda calculus the syntax is
extended to terms involving ⟨N, M⟩, pi(N), ini^ϕ(N), case(N, x1.M1, x2.M2),
and contrϕ(N), with corresponding redexes.
A derivation which concludes ϕ1 ∨ ϕ2 by ∨Intro from a derivation D of ϕi,
and then applies ∨Elim with derivations D1, D2 of χ from [ϕ1]^u and [ϕ2]^u
respectively, reduces to the derivation of χ obtained by replacing each
assumption [ϕi]^u in Di with the derivation D. On proof terms:

(Γ, case(ini(M^{ϕi}), x1^{ϕ1}.N1^χ, x2^{ϕ2}.N2^χ)) → (Γ, Ni^χ[M^{ϕi}/xi^{ϕi}])

This is the beta reduction rule for sum types. Here, M[N/x] means replacing
all assumptions denoted by the variable x in M with N.
It would be nice if we passed the context Γ to the substitution function so
that it could check whether the substitution makes sense. For example, xy[ab/y]
does not make sense in the context {x : ϕ → θ, y : ϕ, a : ψ → χ, b : ψ}, since
then we would be substituting a derivation of χ where a derivation of ϕ is
expected. However, as long as our use of substitution is careful enough to
avoid such errors, we won’t have to worry about such conflicts. Thus we can
define it recursively, as we did for the untyped lambda calculus, as if we
were dealing with untyped terms.
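Such a recursive definition can be sketched as follows (our own tuple encoding; capture and the well-typedness of the substitution are ignored, as discussed above):

```python
# Sketch: substitution M[N/x] on a tuple AST, defined recursively as for
# untyped lambda terms. Terms: ("var", x), ("lam", x, phi, body),
# ("app", P, Q).

def subst(M, N, x):
    kind = M[0]
    if kind == "var":
        return N if M[1] == x else M
    if kind == "lam":
        y, phi, body = M[1], M[2], M[3]
        if y == x:              # x is rebound under this lambda; stop
            return M
        return ("lam", y, phi, subst(body, N, x))
    if kind == "app":
        return ("app", subst(M[1], N, x), subst(M[2], N, x))
    return M

# (x y)[z/x] = z y
M = ("app", ("var", "x"), ("var", "y"))
assert subst(M, ("var", "z"), "x") == ("app", ("var", "z"), ("var", "y"))
```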
Finally, the reduction for the function type corresponds to the removal of a
detour consisting of a →Intro followed by a →Elim: a derivation D of ψ from
the assumption [ϕ]^u, discharged by →Intro to yield ϕ → ψ and then combined
by →Elim with a derivation D′ of ϕ, reduces to the derivation of ψ in which
each assumption [ϕ]^u in D is replaced by D′.
Absurdity has only an elimination rule and no introduction rule, thus there
is no such reduction for it.
Note that the above notion of reduction concerns only derivations with a cut
at the end. We would of course like to extend it to reductions of cuts anywhere
in a derivation, or reductions of subterms of proof terms which constitute
redexes. Note, however, that the conclusion does not change after reduction;
thus we are free to continue applying rules below both sides of →. The
resulting pairs of trees constitute an extended notion of reduction; it is
analogous to compatibility in the untyped lambda calculus.
4.7 Normalization
In this section we prove that, via a suitable reduction order, any deduction
can be reduced to a normal deduction; this is called the normalization property.
We will make use of the propositions-as-types correspondence: we show that
every proof term can be reduced to a normal form; normalization for natural
deduction derivations then follows.
First we define some functions that measure the complexity of terms. The
length len(ϕ) of a formula is defined by

len(p) = 0
len(ϕ ∧ ψ) = len(ϕ) + len(ψ) + 1
len(ϕ ∨ ψ) = len(ϕ) + len(ψ) + 1
len(ϕ → ψ) = len(ϕ) + len(ψ) + 1.

The complexity of a proof term is measured by the most complex redex in it,
and is 0 if the term is normal.
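The length function above can be transcribed directly; the tuple encoding of formulas is ours, and we treat ⊥ like an atomic formula (length 0), which the clauses above leave implicit:

```python
# Sketch: len(phi) by recursion on the formula structure.

def length(phi):
    if phi[0] in ("atom", "bottom"):
        return 0
    # the "and", "or", and "implies" clauses each add 1
    return length(phi[1]) + length(phi[2]) + 1

p, q = ("atom", "p"), ("atom", "q")
assert length(("and", p, q)) == 1
assert length(("implies", ("or", p, q), p)) == 2
```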
Lemma 4.6. If M[N^ϕ/x^ϕ] is a redex and M ≢ x, then one of the following
cases holds:

1. M is itself a redex, or
2. M is of the form pi(x), and N is of the form ⟨P1, P2⟩, or
3. M is of the form case(x, x1.P1, x2.P2), and N is of the form ini(Q).

In the first case, cr(M[N/x]) = cr(M); in the other cases, cr(M[N/x]) =
len(ϕ). For instance, in the second case, M[N/x] is pi(⟨P1, P2⟩), whose cut
rank is len(ϕ).
Lemma 4.7. If M contracts to M 0 , and cr(M ) > cr(N ) for all proper redex
sub-terms N of M , then cr(M ) > mr(M 0 ).
3. If M is of the form

case(ini(N^{ϕi}), x1^{ϕ1}.N1^χ, x2^{ϕ2}.N2^χ),

then M′ ≡ Ni[N/xi^{ϕi}]. Consider a redex in M′. Either there is a corre-
sponding redex in Ni with equal cut rank, which is less than cr(M) by
assumption; or the cut rank equals len(ϕi), which by definition is less
than cr(case(ini(N^{ϕi}), x1^{ϕ1}.N1^χ, x2^{ϕ2}.N2^χ)).
Proof. The second claim follows from the first. We prove the first by complete
induction on m = mr(M), where M is a proof term.

1. If m = 0, M is already normal.

2. Otherwise, we proceed by induction on n, the number of redexes in M
with cut rank equal to m.

a) If n = 1, select any redex N such that m = cr(N) > cr(P) for every
proper sub-term P of N which is itself a redex. Such a redex must
exist, since any term has only finitely many subterms.
Let N′ denote the reductum of N. By the lemma, mr(N′) < cr(N);
thus n, the number of redexes with cut rank equal to m, decreases
to zero. So m decreases (by 1 or more), and we can apply the
inductive hypothesis for m.

b) For the induction step, assume n > 1. The process is similar, except
that n only decreases to a positive number and thus m does not
change. We simply apply the induction hypothesis for n.
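The overall strategy, contract a redex and repeat until the term is normal, can be sketched for product-type redexes alone (our own tuple encoding and helper names; the full calculus would add clauses for the other redex forms):

```python
# Sketch: normalize by repeatedly contracting the product-type redex
# p_i(<M1, M2>) -> M_i anywhere in a term. Terms are nested tuples:
# ("var", x), ("pair", M1, M2), ("proj", i, M) with i in {1, 2}.

def contract(M):
    """One bottom-up pass; returns (term, changed)."""
    if not isinstance(M, tuple):
        return M, False
    parts, changed = [], False
    for part in M:
        new, c = contract(part)
        parts.append(new)
        changed = changed or c
    M = tuple(parts)
    if M[0] == "proj" and M[2][0] == "pair":
        return M[2][M[1]], True      # p_i(<M1, M2>) -> M_i
    return M, changed

def normalize(M):
    changed = True
    while changed:
        M, changed = contract(M)
    return M

# p_1(<a, p_2(<b, c>)>) normalizes to a:
term = ("proj", 1, ("pair", ("var", "a"),
                    ("proj", 2, ("pair", ("var", "b"), ("var", "c")))))
assert normalize(term) == ("var", "a")
```

This loop terminates for the same reason as in the proof above: each contraction removes a redex, and for this fragment no contraction creates new ones.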