Computability and Complexity
Neil Immerman
Stanford Encyclopedia of Philosophy
Copyright © 2015 by the author

1. What can be computed in principle? Introduction and History
2. Turing Machines
   2.1 Universal Machines
   2.2 The Halting Problem
   2.3 Computable Functions and Enumerability
   2.4 The Unsolvability of the Halting Problem
3. Primitive Recursive Functions
   3.1 Recursive Functions
4. Computational Complexity: Functions Computable in Practice
   4.1 Significance of Complexity
Bibliography
Academic Tools
Other Internet Resources
Related Entries
1. What can be computed in principle? Introduction and History

In the 1930's, well before there were computers, various mathematicians from around the world invented precise, independent definitions of what it means to be computable. Alonzo Church defined the Lambda calculus, Kurt Gödel defined Recursive functions, Stephen Kleene defined Formal systems, Markov defined what became known as Markov algorithms, Emil Post and Alan Turing defined abstract machines now known as Post machines and Turing machines.

Surprisingly, all of these models are exactly equivalent: anything computable in the lambda calculus is computable by a Turing machine, and similarly for any other pair of the above computational systems. After this was proved, Church expressed the belief that the intuitive notion of "computable in principle" is identical to the above precise notions. This belief, now called the "Church-Turing Thesis", is uniformly accepted by mathematicians.

Part of the impetus for the drive to codify what is computable came from the mathematician David Hilbert. Hilbert believed that all of mathematics could be precisely axiomatized. He felt that once this was done, there would be an "effective procedure", i.e., an algorithm that would take as input any precise mathematical statement, and, after a finite number of steps, decide whether the statement was true or false. Hilbert was asking for what would now be called a decision procedure for all of mathematics.

As a special case of this decision problem, Hilbert considered the validity problem for first-order logic. First-order logic is a mathematical language in which most mathematical statements can be formulated. Every statement in first-order logic has a precise meaning in every appropriate logical structure, i.e., it is true or false in each such structure. Those statements that are true in every appropriate structure are called valid. Those statements that are true in some structure are called satisfiable. Notice that a formula, φ, is valid iff its negation, ¬φ, is not satisfiable.

Hilbert called the validity problem for first-order logic the entscheidungsproblem. In a textbook, Principles of Mathematical Logic, by Hilbert and Ackermann, the authors wrote, "The Entscheidungsproblem is solved when we know a procedure that allows for any given logical expression to decide by finitely many operations its validity or satisfiability.… The entscheidungsproblem must be considered the main problem of mathematical logic." (Börger, Grädel, & Gurevich 1997).

In his 1930 Ph.D. thesis, Gödel presented a complete axiomatization of first-order logic, based on the Principia Mathematica by Whitehead and Russell (Gödel 1930). Gödel proved his Completeness Theorem, namely that a formula is provable from the axioms if and only if it is valid. Gödel's Completeness theorem was a step towards the resolution of Hilbert's entscheidungsproblem.

In particular, since the axioms are easily recognizable, and the rules of inference very simple, there is a mechanical procedure that can list out all proofs. Note that each line in a proof is either an axiom, or follows from previous lines by one of the simple rules. For any given string of characters, we can tell if it is a proof. Thus we can systematically list all strings of characters of length 1, 2, 3, and so on, and check whether each of these is a proof. If so, then we can add the proof's last line to our list of theorems. In this way, we can list out all theorems, i.e., exactly the valid formulas of first-order logic, by a simple mechanical procedure. More precisely, the set of valid formulas is the range of a computable function. In modern terminology we say that the set of valid formulas of first-order logic is recursively enumerable (r.e.).
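This listing procedure can be sketched as a short generator: produce every string over the proof alphabet in order of length, test each with the (computable) proof checker, and emit the last line of each string that turns out to be a proof. The sketch below is illustrative only; is_proof and last_line are hypothetical stand-ins for a real proof checker over a real axiom system, and the toy demo simply calls a string a "proof" when it ends in "b".

```python
from itertools import count, islice, product

def theorems(is_proof, last_line, alphabet="ab"):
    """Enumerate theorems: generate every string of length 1, 2, 3, ...,
    keep those the checker certifies as proofs, and yield each proof's
    last line. This is an r.e. listing: every member eventually appears,
    but the generator as a whole never terminates."""
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            candidate = "".join(chars)
            if is_proof(candidate):        # hypothetical computable checker
                yield last_line(candidate)

# Toy stand-ins: call a string a "proof" when it ends in "b",
# and take the whole string as its own last line.
listing = theorems(lambda s: s.endswith("b"), lambda s: s)
print(list(islice(listing, 3)))  # ['b', 'ab', 'bb']
```

Because the alphabet and the checker are parameters, the same skeleton lists out any set for which a computable certificate-checker exists.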
Gödel's Completeness theorem was not sufficient, however, to give a positive solution to the entscheidungsproblem. Given a formula, φ, if φ is valid then the above procedure would eventually list it out and thus could answer, "Yes, φ is valid." However, if φ were not valid then we might never find this fact out. What was missing was a procedure to list out all the non-valid formulas, or equivalently to list out all satisfiable formulas.

A year later, in 1931, Gödel shocked the mathematical world by proving his Incompleteness Theorem: there is no complete and computable axiomatization of the first-order theory of the natural numbers. That is, there is no reasonable list of axioms from which we can prove exactly all true statements of number theory (Gödel 1931).

A few years later, Church and Turing independently proved that the entscheidungsproblem is unsolvable. Church did this by using the methods of Gödel's Incompleteness Theorem to show that the set of satisfiable formulas of first-order logic is not r.e., i.e., they cannot be systematically listed out by a function computable by the lambda calculus. Turing introduced his machines and proved many interesting theorems, some of which we will discuss in the next section. In particular, he proved the unsolvability of the halting problem. He obtained the unsolvability of the entscheidungsproblem as a corollary.

Hilbert was very disappointed because his program towards a decision procedure for all of mathematics was proved impossible. However, as we will see in more detail in the rest of this article, a vast amount was learned about the fundamental nature of computation.

2. Turing Machines

In his 1936 paper, "On Computable Numbers, with an Application to the Entscheidungsproblem", Alan Turing introduced his machines and established their basic properties. He thought clearly and abstractly about what it would mean for a machine to perform a computational task. Turing defined his machines to consist of the following:

  a finite set, Q, of possible states, because any device must be in one of finitely many possible states;
  a potentially infinite tape, consisting of consecutive cells, σ1, σ2, σ3, …, from some finite alphabet, Σ;
  (Σ may be any finite set containing at least two symbols. It is convenient to fix Σ = {0, 1, b} consisting of the binary alphabet plus the blank cell symbol. We usually assume that a finite initial segment of the tape contains binary symbols, and the rest is blank.)
  a read/write tape head, h ≥ 1, scanning tape cell σh; and finally,
  a transition function, δ : Q × Σ → Q × Σ × {−1, 0, 1}.
  (The meaning of the transition function is that from any given state, q, looking at any given symbol, σh, δ tells us the new state the machine should enter, the new symbol that should be written in the current square, and the new head position, h′ = h + d, where d ∈ {−1, 0, 1} is the displacement given by δ.)

The linear nature of its memory tape, as opposed to random access memory, is a limitation on computation speed but not power: a Turing machine can find any memory location, i.e., tape cell, but this may be time consuming because it has to move its head step by step along its tape.

The beauty of Turing machines is that the model is extremely simple, yet nonetheless, extremely powerful. A Turing machine has potentially infinite work space so that it can process arbitrarily large inputs, e.g., multiply two huge numbers, but it can only read or write a bounded amount of information, i.e., one symbol, per step. Even before Turing machines and
all the other mathematical models of computation were proved equivalent, and before any statement of the Church-Turing thesis, Turing argued convincingly that his machines were as powerful as any possible computing device.

2.1 Universal Machines

Each Turing machine can be uniquely described by its transition table: for each state, q, and each symbol, σ, δ(q, σ) is the new state, the new symbol, and the head displacement. These transition tables can be written as a finite string of symbols, giving the complete set of instructions of each Turing machine. Furthermore, these strings of symbols can be listed in lexicographic order as follows: M1, M2, M3, …, where Mi is the transition table, i.e., the complete set of instructions, for Turing machine number i. The transition table for Mi is the program for Turing machine i, or more simply, the ith program.

Turing showed that he could build a Turing machine, U, that was universal, in the sense that it could run the program of any other Turing machine. More explicitly, for any i, and any input w, U on inputs i and w would do exactly what Mi would do on input w, in symbols,

U(i, w) = Mi(w)

Turing's construction of a universal machine gives the most fundamental insight into computation: one machine can run any program whatsoever. No matter what computational tasks we may need to perform in the future, a single machine can perform them all. This is the insight that makes it feasible to build and sell computers. One computer can run any program. We don't need to buy a new computer every time we have a new problem to solve. Of course, in the age of personal computers, this fact is such a basic assumption that it may be difficult to step back and appreciate it.

2.2 The Halting Problem

Because they were designed to embody all possible computations, Turing machines have an inescapable flaw: some Turing machines on certain inputs never halt. Some Turing machines do not halt for silly reasons: for example, we can mis-program a Turing machine so that it gets into a tight loop, for example, in state 17 looking at a 1 it might go to state 17, write a 1 and displace its head by 0. Slightly less silly, we can reach a blank symbol, having only blank symbols to the right, and yet keep staying in the same state, moving one step to the right, and looking for a "1". Both of those cases of non-halting could be easily detected and repaired by a decent compiler. However, consider the Turing machine MF, which on input "0", systematically searches for the first counter-example to Fermat's last theorem, and upon finding it outputs the counter-example and halts. Until Andrew Wiles relatively recently proved Fermat's Last Theorem, all the mathematicians in the world, working for over three centuries, were unable to decide whether or not MF on input "0" eventually halts. Now we know that it never does.

2.3 Computable Functions and Enumerability

Since a Turing machine might not halt on certain inputs, we have to be careful in how we define functions computable by Turing machines. Let the natural numbers, N, be the set {0, 1, 2, …} and let us consider Turing machines as partial functions from N to N.

Let M be a Turing machine and n a natural number. We say that M's tape contains the number n, if M's tape begins with a binary representation of the number n (with no unnecessary leading 0's) followed by just blank symbols from there on.

If we start the Turing machine M on a tape containing n and it eventually halts with its tape containing m, then we say that M on input n, computes
m: M(n) = m. If, when we start M on input n, it either never halts, or when it halts, its tape does not contain a natural number, e.g., because it has leading 0's, or digits interspersed with blank symbols, then we say that M(n) is undefined, in symbols: M(n) = ↗. We can thus associate with each Turing machine, M, a partial function, M : N → N ∪ {↗}. We say that the function M is total if for all n ∈ N, M(n) ∈ N, i.e., M(n) is always defined.

Now we can formally define what it means for a set to be recursively enumerable (r.e.), which we earlier described informally. Let S ⊆ N. Then S is r.e. if and only if there is some Turing machine, M, such that S is the image of the function computed by M, in symbols,

S = {M(n) ∣ n ∈ N; M(n) ≠ ↗}.

Thus, S is r.e. just if it can be listed out by some Turing machine. Suppose that S is r.e. and its elements are enumerated by Turing machine M as above. We can then describe another Turing machine, P, which, on input n, runs M in a round-robin fashion on all its possible inputs until eventually M outputs n. If this happens then P halts and outputs "1", i.e., P(n) = 1. If n ∉ S, then M will never output n, so P(n) will never halt, i.e., P(n) = ↗.

Let the notation P(n) ↓ mean that Turing machine P on input n eventually halts. For a Turing machine, P, define L(P), the set accepted by P, to be those numbers n such that P on input n eventually halts,

L(P) = {n ∣ P(n) ↓}.

The above argument shows that if a set S is r.e. then it is accepted by some Turing machine, P, i.e., S = L(P). The converse of this statement holds as well. That is, S is r.e. if and only if it is accepted by some Turing machine, P.

We say that a set, S, is decidable if and only if there is a total Turing machine, M, that decides for all n ∈ N whether or not n ∈ S. Think of "1" as "yes" and "0" as "no". For all n ∈ N, if n ∈ S, then M(n) = 1, i.e., M on input n eventually halts and outputs "yes", whereas if n ∉ S, then M(n) = 0, i.e., M on input n eventually halts and outputs "no". Synonyms for decidable are: computable, solvable, and recursive.

For S ⊆ N, the complement of S is N − S, i.e., the set of all natural numbers not in S. We say that the set S is co-r.e. if and only if its complement is r.e. If a set, S, is r.e. and co-r.e. then we can list out all of its elements in one column and we can list out all of its non-elements in a second column. In this way we can decide whether or not a given element, n, is in S: just scan the two columns and wait for n to show up. If it shows up in the first column then n ∈ S. Otherwise it will show up in the second column and n ∉ S. In fact, a set is recursive iff it is r.e. and co-r.e.

2.4 The Unsolvability of the Halting Problem

Turing asked whether every set of natural numbers is decidable. It is easy to see that the answer is, "no", by the following counting argument. There are uncountably many subsets of N, but since there are only countably many Turing machines, there can be only countably many decidable sets. Thus almost all sets are undecidable.

Turing actually constructed a non-decidable set. As we will see, he did this using a diagonal argument. The diagonal argument goes back to Georg Cantor, who used it to show that the real numbers are uncountable. Gödel used a similar diagonal argument in his proof of the Incompleteness Theorem, in which he constructed a sentence, J, in number theory whose meaning could be understood to be, "J is not a theorem."

Turing constructed a diagonal halting set, K, as follows:
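The round-robin machine P of Section 2.3 can be sketched in code. Purely as a modeling assumption, each run of the enumerating machine M is represented by a Python generator that may yield None for a while ("still running") and finally yields its output if it halts; the max_rounds cutoff exists only so the sketch terminates, whereas the real P simply runs forever when n ∉ S.

```python
def accepts(machine, n, max_rounds=1000):
    """Dovetail machine(0), machine(1), ... one step at a time, so that
    no single non-halting run can block the search for an input on
    which the enumerator outputs n."""
    runs = []
    for r in range(max_rounds):
        runs.append(machine(r))        # start run number r
        halted = []
        for run in runs:
            try:
                out = next(run)        # advance this run by one step
            except StopIteration:
                halted.append(run)     # this run has finished
                continue
            if out == n:               # M output n, so n is in S: accept
                return True
        for run in halted:
            runs.remove(run)
    return None                        # unresolved within the step bound

def doubler(i):
    """Toy enumerator of the even numbers: run i outputs 2 * i."""
    yield 2 * i
```

With these toy definitions, accepts(doubler, 8) returns True, while accepts(doubler, 7) exhausts the bound and returns None, mirroring the fact that P(n) = ↗ when n ∉ S.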
Here composition is the natural way to combine functions, and primitive recursion is a restricted kind of recursion in which h with first argument n + 1 is defined in terms of h with first argument n, and all the other arguments unchanged.

Define the primitive recursive functions to be the smallest class of functions that contains the Initial functions and is closed under Composition and Primitive Recursion. The set of primitive recursive functions is equal to the set of functions computed using bounded iteration (Meyer & Ritchie 1967), i.e. the set of functions definable in the language Bloop from (Hofstadter 1979).

The primitive recursive functions have a very simple definition and yet they are extremely powerful. Gödel proved inductively that every primitive recursive function can be simply represented in first-order number theory. He then used the primitive recursive functions to encode formulas and even sequences of formulas by numbers. He finally used the primitive recursive functions to compute properties of the represented formulas, including that a formula was well formed, a sequence of formulas was a proof, and that a formula was a theorem.

It takes a long series of lemmas to show how powerful the primitive recursive functions are. The following are a few examples showing that addition, multiplication, and exponentiation are primitive recursive.

Define the addition function, P(x, y), as follows:

P(0, y) = η(y)
P(n + 1, y) = σ(P(n, y))

(Note that this fits into the definition of primitive recursion because the function g(x1, x2, x3) = η(σ(x1)) is definable from the initial functions η and σ by composition.)

Next, define the multiplication function, T(x, y), as follows:

T(0, y) = ζ( )
T(n + 1, y) = P(T(n, y), y).

Next, we define the exponential function, E(x, y). (Usually 0^0 is considered undefined, but since primitive recursive functions must be total, we define E(0, 0) to be 1.) Since primitive recursion only allows us to recurse on the first argument, we use two steps to define the exponential function:

R(0, y) = σ(ζ( ))
R(n + 1, y) = T(R(n, y), y).

Finally we can define E(x, y) = R(η(y), η(x)) by composition. (Recall that η is the identity function, so this could be more simply written as E(x, y) = R(y, x).)

The exponential function, E, grows very rapidly, for example, E(10, 10) is ten billion, and E(50, 50) is over 10^84 (and thus significantly more than the estimated number of atoms in the universe). However, there are much faster growing primitive recursive functions. As we saw, E was defined from the slowly growing function, σ, using three applications of primitive recursion: one for addition, one for multiplication, and then one more for exponentiation. We can continue to apply primitive recursion to build a series of unimaginably fast growing functions. Let's do just one more step in the series, defining the hyper-exponential function, H(n, m), as 2 to the 2 to the 2 to the … to the m, with a tower of n 2s. H is primitive recursive because it can be defined from E using one more application of primitive recursion:

H(0, y) = y
H(n + 1, y) = E(2, H(n, y))
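Assuming Python as notation, the recursion equations above can be transcribed directly, writing zeta, sigma, and eta for ζ, σ, and η. The transcription is faithful but wildly inefficient, since every addition bottoms out in repeated succession, so only tiny arguments are practical; that inefficiency is itself a hint of how fast these functions grow.

```python
def zeta():                 # ζ( ): the zero function
    return 0

def sigma(x):               # σ: successor
    return x + 1

def eta(x):                 # η: identity
    return x

def P(x, y):                # addition: P(0,y) = η(y), P(n+1,y) = σ(P(n,y))
    return eta(y) if x == 0 else sigma(P(x - 1, y))

def T(x, y):                # multiplication: T(0,y) = ζ( ), T(n+1,y) = P(T(n,y),y)
    return zeta() if x == 0 else P(T(x - 1, y), y)

def R(x, y):                # R(n,y) = y**n: R(0,y) = σ(ζ( )), R(n+1,y) = T(R(n,y),y)
    return sigma(zeta()) if x == 0 else T(R(x - 1, y), y)

def E(x, y):                # exponentiation x**y, via E(x,y) = R(y,x)
    return R(eta(y), eta(x))

def H(n, y):                # hyper-exponential: H(0,y) = y, H(n+1,y) = E(2, H(n,y))
    return y if n == 0 else E(2, H(n - 1, y))

print(P(3, 4), T(3, 4), E(3, 4), H(2, 2))  # 7 12 81 16
```

Even E(2, 10) already strains this transcription, because computing 2^10 this way performs on the order of a thousand unary successor chains; H beyond tiny arguments is hopeless, exactly as the growth discussion below suggests.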
Thus H(2, 2) = 2^4 = 16, and H(3, 3) = 2^256 is more than 10^77 and comparable to the number of atoms in the universe. If that's not big enough for you, then consider H(4, 4). To write this number in decimal notation we would need a one, followed by more zeros than the number of particles in the universe.

3.1 Recursive Functions

The set of primitive recursive functions is a huge class of computable functions. In fact, they can be characterized as the set of functions computable in time that is some primitive recursive function of n, where n is the length of the input. For example, since H(n, n) is a primitive recursive function, the primitive recursive functions include all of TIME[H(n, n)]. (See the next section for a discussion of computational complexity, including TIME.) Thus, the primitive recursive functions include all functions that are feasibly computable by any conceivable measure of feasible, and much beyond that.

However, the primitive recursive functions do not include all functions computable in principle. To see this, we can again use diagonalization. We can systematically encode all definitions of primitive recursive functions of arity 1, calling them p1, p2, p3, and so on.

We can then build a Turing machine to compute the value of the following diagonal function, D(n) = pn(n) + 1.

Notice that D is a total, computable function from N to N, but it is not primitive recursive. Why? Suppose for the sake of a contradiction that D were primitive recursive. Then D would be equal to pd for some d ∈ N. But it would then follow that

pd(d) = pd(d) + 1,

which is a contradiction. Therefore, D is not primitive recursive.

Alas, the above diagonal argument works on any class of total functions that could be considered a candidate for the class of all computable functions. The only way around this, if we want all functions computable in principle, not just in practice, is to add some kind of unbounded search operation. This is what Gödel did to extend the primitive recursive functions to the recursive functions.

Define the unbounded minimization operator, μ, as follows. Let f be a perhaps partial function of arity k + 1. Then μ[f] is defined as the following function of arity k. On input x1, …, xk do the following:

For i = 0 to ∞ do {
    if f(i, x1, …, xk) = 1, then output i
}

Thus if f(i, x1, …, xk) = 1, and for all j < i, f(j, x1, …, xk) is defined, but not equal to 1, then μ[f](x1, …, xk) = i. Otherwise μ[f](x1, …, xk) is undefined.

Gödel defined the set of Recursive functions to be the closure of the initial primitive recursive functions under composition, primitive recursion, and μ. With this definition, the Recursive functions are exactly the same as the set of partial functions computable by the Lambda calculus, by Kleene Formal systems, by Markov algorithms, by Post machines, and by Turing machines.

4. Computational Complexity: Functions Computable in Practice
During World War II, Turing helped design and build a specialized computing device called the Bombe at Bletchley Park. He used the Bombe to crack the German "Enigma" code, greatly aiding the Allied cause [Hodges, 1992]. By the 1960's computers were widely available in industry and at universities. As algorithms were developed to solve myriad problems, some mathematicians and scientists began to classify algorithms according to their efficiency and to search for best algorithms for certain problems. This was the beginning of the modern theory of computation.

In this section we are dealing with complexity instead of computability, and all the Turing machines that we consider will halt on all their inputs. Rather than accepting by halting, we will assume that a Turing machine accepts by outputting "1" and rejects by outputting "0", thus we redefine the set accepted by a total machine, M,

Alan Cobham and Jack Edmonds identified the complexity class, P, of problems recognizable in some polynomial amount of time, as being an excellent mathematical wrapper of the class of feasible problems—those problems all of whose moderately-sized instances can be feasibly recognized,

P = ⋃_{i=1,2,…} TIME[n^i]

Any problem not in P is certainly not feasible. On the other hand, natural problems that have algorithms in P tend to eventually have algorithms discovered for them that are actually feasible.

Many important complexity classes besides P have been defined and studied; a few of these are NP, PSPACE, and EXPTIME. PSPACE
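The Subset Sum problem discussed just below asks: given numbers a1, …, an and a target t, is there a subset of the numbers whose sum is exactly t? A minimal deterministic sketch simply tries all 2^n subsets, which is exactly the exponential-time behavior the surrounding text contrasts with nondeterministic guessing:

```python
from itertools import combinations

def subset_sum(numbers, target):
    """Decide Subset Sum by brute force: where a nondeterministic machine
    guesses, for each a_i, whether or not to take it, we deterministically
    try every one of the 2^n subsets."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return True
    return False

print(subset_sum([3, 9, 8, 4, 5, 7], 15))  # True  (e.g., 3 + 4 + 8)
print(subset_sum([1, 2, 5], 4))            # False
```

Each of the 2^n branches is checked in linear time, so the guess-then-verify structure of NP is visible here: a single branch is cheap, and only the number of branches is expensive.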
exactly t? This problem is easy to solve in nondeterministic linear time: for each i, we guess whether or not to take ai. Next we add up all the numbers we decided to take, and if the sum is equal to t then accept. Thus the nondeterministic time is linear, i.e., some constant times the length of the input, n. However there is no known (deterministic) way to solve this problem in time less than exponential in n.

There has been a large study of algorithms and the complexity of many important problems is well understood. In particular, reductions between problems have been defined and used to compare the relative difficulty of two problems. Intuitively, we say that A is reducible to B (A ≤ B) if there is a simple transformation, τ, that maps instances of A to instances of B in a way that preserves membership, i.e., τ(w) ∈ B ⇔ w ∈ A.

Remarkably, a high percentage of naturally occurring computational problems turn out to be complete for one of the above classes. (A problem, A, is complete for a complexity class C if A is a member of C and all other problems B in C are no harder than A, i.e., B ≤ A. Two complete problems for the same class have equivalent complexity.)

The reason for this completeness phenomenon has not been adequately explained. One plausible explanation is that natural computational problems tend to be universal in the sense of Turing's universal machine. A universal problem in a certain complexity class can simulate any other problem in that class. The reason that the class NP is so well studied is that a large number of important practical problems are NP complete, including Subset Sum. None of these problems is known to have an algorithm that is faster than exponential time, although some NP-complete problems admit feasible approximations to their solutions.

A great deal remains open about computational complexity. We know that strictly more of a particular computational resource lets us solve strictly harder problems, e.g. TIME[n] is strictly contained in TIME[n^1.01], and similarly for SPACE and other measures. However, the trade-offs between different computational resources are still quite poorly understood. It is obvious that P is contained in NP. Furthermore, NP is contained in PSPACE because in PSPACE we can systematically try every single branch of an NP computation, reusing space for the successive branches, and accepting if any of these branches lead to acceptance. PSPACE is contained in EXPTIME because if a PSPACE machine takes more than exponential time, then it has exactly repeated some configuration, so it must be in an infinite loop. The following are the known relationships between the above classes:

P ⊆ NP ⊆ PSPACE ⊆ EXPTIME

However, while it seems clear that P is strictly contained in NP, that NP is strictly contained in PSPACE, and that PSPACE is strictly contained in EXPTIME, none of these inequalities has been proved. In fact, it is not even known that P is different from PSPACE, nor that NP is different from EXPTIME. The only known proper inclusion from the above is that P is strictly contained in EXPTIME. The remaining questions concerning the relative power of different computational resources are fundamental unsolved problems in the theory of computation.

There is an extensive theory of computational complexity. This entry briefly describes the area, putting it into the context of the question of what is computable in principle versus in practice. For readers interested in learning more about complexity, there are excellent books, for example, [Papadimitriou, 1994] and [Arora and Barak, 2009]. There is also the entry on Computational Complexity Theory.

4.1 Significance of Complexity
The following diagram maps out all the complexity classes we have
discussed and a few more as well. The diagram comes from work in
Descriptive Complexity [Immerman, 1999] which shows that all important
complexity classes have descriptive characterizations. Fagin began this
field by proving that NP = SO∃ , i.e., a property is in NP iff it is expressible
in second-order existential logic [Fagin, 1974].
Vardi and the author of this entry later independently proved that P =
FO(LFP): a property is in P iff it is expressible in first-order logic plus a
least fixed-point operator (LFP) which formalizes the power to define new
relations by induction. A captivating corollary of this is that P = NP iff SO
= FO(LFP). That is, P is equal to NP iff every property expressible in
second order logic is already expressible in first-order logic plus inductive
definitions. (The languages in question are over finite ordered input
structures. See [Immerman, 1999] for details.)
The top right of the diagram shows the recursively enumerable (r.e.) problems; this includes r.e.-complete problems such as the halting problem (Halt). On the left is the set of co-r.e. problems, including the co-r.e.-complete problem, the complement of Halt -- the set of Turing Machines that never halt on a given input. We mentioned at the end of Section 2.3 that the intersection of the set of r.e. problems and the set of co-r.e. problems is equal to the set of Recursive problems. The set of Primitive Recursive problems is a strict subset of the Recursive problems.

Moving toward the bottom of the diagram, there is a region marked with a green dotted line labelled "truly feasible". Note that this is not a mathematically defined class, but rather an intuitive notion of those problems that can be solved exactly, for all the instances of reasonable size, within a reasonable amount of time, using a computer that we can afford. (Interestingly, as the speed of computers has dramatically increased over the years, our expectation of how large an instance we should be able to handle has increased accordingly. Thus, the boundary of what is "truly feasible" changes more slowly than the increase of computer speed might suggest.)

As mentioned before, P is a good mathematical wrapper for the set of feasible problems. There are problems in P requiring n^1,000 time for problems of size n and thus not feasible. Nature appears to be our friend here, which is to say naturally occurring problems in P favor relatively simple algorithms, and "natural" problems tend to be feasible. The number of steps required for problems of size n tends to be less than cn^k with small multiplicative constants c, and very small exponents, k, i.e., k ≤ 2.

In practice the asymptotic complexity of naturally occurring problems tends to be the key issue determining whether or not they are feasible. A problem with complexity 17n can be handled in under a minute on modern computers, for every instance of size a billion. On the other hand, a problem with worst-case complexity 2^n cannot be handled in our lifetimes for some instance of size a hundred.

Remarkably, natural problems tend to be complete for important complexity classes, namely the ones in the diagram and only a very few others. This fascinating phenomenon means that algorithms and complexity are more than abstract concepts; they are important at a practical level. We have had remarkable success in proving that our problem of interest is complete for a well-known complexity class. If the class is contained in P, then we can usually just look up a known efficient algorithm. Otherwise, we must look at simplifications or approximations of our problem which may be feasible.

There is a rich theory of the approximability of NP optimization problems (see [Arora & Barak, 2009]). For example, the Subset Sum problem mentioned above is an NP-complete problem. Most likely it requires exponential time to tell whether a given Subset Sum problem has an exact solution. However, if we only want to see if we can reach the target up to a fixed number of digits of accuracy, then the problem is quite easy, i.e., Subset Sum is hard, but very easy to approximate.

Even the r.e.-complete Halting problem has many important feasible subproblems. Given a program, it is in general not possible to figure out what it does and whether or not it eventually halts. However, most programs written by programmers or students can be automatically analyzed, optimized and even corrected by modern compilers and model checkers.

The class NP is very important practically and philosophically. It is the class of problems, S, such that any input w is in S iff there is a proof, p(w), that w ∈ S and p(w) is not much larger than w. Thus, very informally, we can think of NP as the set of intellectual endeavors that may be in reach: if we find the answer to whether w ∈ S, we can convince others that we have done so.

The boolean satisfiability problem, SAT, was the first problem proved NP complete [Cook, 1971], i.e., it is a hardest NP problem. The fact that SAT is NP complete means that all problems in NP are reducible to SAT. Over
the years, researchers have built very efficient SAT solvers which can Mathematics, 17: 449–467.
quickly solve many SAT instances -- i.e., find a satisfying assignment or Enderton, Herbert B., 1972, A Mathematical Introduction to Logic, New
prove that there is none -- even for instances with millions of variables. York: Academic Press.
Thus, SAT solvers are being used as general purpose problem solvers. On Fagin, Ronald, 1974, “Generalized First-Order Spectra and Polynomial-
the other hand, there are known classes of small instances for which Time Recognizable Sets,” in Complexity of Computation, R.
current SAT solvers fail. Thus part of the P versus NP question concerns Karp(ed.), SIAM-AMS Proc, 7: 27–41.
the practical and theoretical complexity of SAT [Nordström, 2015]. Garey, Michael and David S. Johnson, 1979, Computers and
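To fix ideas, here is a minimal brute-force satisfiability check, our own illustration rather than anything from the entry, and emphatically not how real SAT solvers work: it tries all 2^n assignments, which is exactly the exponential search that modern solvers manage to avoid on many practical instances.

```python
from itertools import product

# A CNF formula is a list of clauses; a clause is a list of nonzero ints,
# where k stands for variable k and -k for its negation (DIMACS-style).

def brute_force_sat(clauses, n_vars):
    """Return a satisfying assignment {var: bool} or None.
    Tries all 2^n assignments -- fine for tiny n, hopeless for n = 100."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {v + 1: bits[v] for v in range(n_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3): satisfiable, e.g. x2 = x3 = True
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))
# (x1) and (not x1): unsatisfiable
print(brute_force_sat([[1], [-1]], 1))   # None
```

A returned assignment is itself an NP certificate: anyone can re-check it against the clauses in linear time, whereas the loop above takes time exponential in the number of variables.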
Bibliography

Arora, Sanjeev and Boaz Barak, 2009, Computational Complexity: A Modern Approach, New York: Cambridge University Press.
Church, Alonzo, 1933, “A Set of Postulates for the Foundation of Logic (Second Paper)”, Annals of Mathematics (Second Series), 33: 839–864.
–––, 1936, “An Unsolvable Problem of Elementary Number Theory,” American Journal of Mathematics, 58: 345–363.
–––, 1936, “A Note on the Entscheidungsproblem,” Journal of Symbolic Logic, 1: 40–41; correction 1, 101–102.
Börger, Egon, Erich Grädel, and Yuri Gurevich, 1997, The Classical Decision Problem, Heidelberg: Springer.
Cobham, Alan, 1964, “The Intrinsic Computational Difficulty of Functions,” Proceedings of the 1964 Congress for Logic, Mathematics, and Philosophy of Science, Amsterdam: North-Holland, 24–30.
Cook, Stephen, 1971, “The Complexity of Theorem Proving Procedures,” Proceedings of the Third Annual ACM STOC Symposium, Shaker Heights, Ohio, 151–158.
Davis, Martin, 2000, The Universal Computer: the Road from Leibniz to Turing, New York: W. W. Norton & Company.
Edmonds, Jack, 1965, “Paths, Trees and Flowers,” Canadian Journal of Mathematics, 17: 449–467.
Enderton, Herbert B., 1972, A Mathematical Introduction to Logic, New York: Academic Press.
Fagin, Ronald, 1974, “Generalized First-Order Spectra and Polynomial-Time Recognizable Sets,” in Complexity of Computation, R. Karp (ed.), SIAM-AMS Proc, 7: 27–41.
Garey, Michael and David S. Johnson, 1979, Computers and Intractability, New York: Freeman.
Gödel, Kurt, 1930, “The Completeness of the Axioms of the Functional Calculus,” in van Heijenoort 1967, 582–591.
–––, 1931, “On Formally Undecidable Propositions of Principia Mathematica and Related Systems I,” in van Heijenoort 1967, 592–617.
Hartmanis, Juris, 1989, “Overview of Computational Complexity Theory,” in J. Hartmanis (ed.), Computational Complexity Theory, Providence: American Mathematical Society, 1–17.
Hilbert and Ackermann, 1928/1938, Grundzüge der theoretischen Logik, Springer. English translation of the 2nd edition: Principles of Mathematical Logic, New York: Chelsea Publishing Company, 1950.
Hodges, Andrew, 1992, Alan Turing: the Enigma, London: Random House.
Hofstadter, Douglas, 1979, Gödel, Escher, Bach: an Eternal Golden Braid, New York: Basic Books.
Hopcroft, John E., 1984, “Turing Machines,” Scientific American, 250(5): 70–80.
Immerman, Neil, 1999, Descriptive Complexity, New York: Springer.
Karp, Richard, 1972, “Reducibility Among Combinatorial Problems,” in Complexity of Computations, R.E. Miller and J.W. Thatcher (eds.), New York: Plenum Press, 85–104.
Kleene, Stephen C., 1935, “A Theory of Positive Integers in Formal Logic,” American Journal of Mathematics, 57: 153–173, 219–244.
–––, 1950, Introduction to Metamathematics, Princeton: Van Nostrand.
Levin, Leonid, 1973, “Universal search problems,” Problemy Peredachi Informatsii, 9(3): 265–266; partial English translation in B.A. Trakhtenbrot, 1984, “A Survey of Russian Approaches to Perebor (Brute-force Search) Algorithms,” IEEE Annals of the History of Computing, 6(4): 384–400.
Markov, A.A., 1960, “The Theory of Algorithms,” American Mathematical Society Translations (Series 2), 15: 1–14.
Meyer, Albert and Dennis Ritchie, 1967, “The Complexity of Loop Programs,” Proc. 22nd National ACM Conference, Washington, D.C., 465–470.
Nordström, Jakob, 2015, “On the Interplay Between Proof Complexity and SAT Solving,” SIGLOG News, 2(3): 18–44.
Papadimitriou, Christos H., 1994, Computational Complexity, Reading, MA: Addison-Wesley.
Péter, Rózsa, 1967, Recursive Functions, translated by István Földes, New York: Academic Press.
Post, Emil, 1936, “Finite Combinatory Processes – Formulation I,” Journal of Symbolic Logic, 1: 103–105.
Rogers, Hartley Jr., 1967, Theory of Recursive Functions and Effective Computability, New York: McGraw-Hill.
Turing, A. M., 1936–7, “On Computable Numbers, with an Application to the Entscheidungsproblem”, Proceedings of the London Mathematical Society, 2(42): 230–265 [Preprint available online].
van Heijenoort, Jean (ed.), 1967, From Frege To Gödel: A Source Book in Mathematical Logic, 1879–1931, Cambridge, MA: Harvard University Press.
Whitehead, Alfred North and Bertrand Russell, 1910, Principia Mathematica, Cambridge: Cambridge University Press.

Academic Tools

How to cite this entry.
Preview the PDF version of this entry at the Friends of the SEP Society.
Look up this entry topic at the Indiana Philosophy Ontology Project (InPhO).
Enhanced bibliography for this entry at PhilPapers, with links to its database.

Other Internet Resources

Descriptive Complexity: a webpage describing research in Descriptive Complexity, which is Computational Complexity from a Logical Point of View (with a diagram showing the World of Computability and Complexity). Maintained by Neil Immerman, University of Massachusetts, Amherst.
Mass, Size, and Density of the Universe, from the National Solar Observatory/Sacramento Peak.

Related Entries

Church, Alonzo: logic, contributions to | Church-Turing Thesis | Computational Complexity Theory | function: recursive | Gödel, Kurt | quantum theory: quantum computing | set theory | Turing, Alan | Turing machines

Copyright © 2015 by the author
Neil Immerman