13. Cubbyhole Mathematics

Kevin T. Kelly
Department of Philosophy
Carnegie Mellon University

The arithmetical hierarchy provides us with lots of cubbyholes.  Recursion theory seeks understanding of objects by putting them in the right holes.  For when we do so, we know what the "main part" of the problem is.  Problems complete for the same class are computationally trivial variants of one another.

Here are some examples.  Some of them are familiar from chapter 6.

Upper Bounds

Upper bounds are easily found by the Tarski-Kuratowski algorithm:
  1. Define the relation in terms of the Kleene predicate.
  2. Put the definition into prenex normal form.
    1. First eliminate arrows using conjunctions, disjunctions and negations.
    2. Then drive in all negations using DeMorgan's rules.
    3. Then rename quantified variables so that they are all distinct in order to prevent clashes when the quantifiers are exported.
    4. Then export quantifiers to the front of the formula in the most advantageous way (i.e., interleave them to minimize alternations without permuting the order of any quantifiers that were already nested).
  3. The first quantifier determines whether the complexity class is Σ or Π.
  4. The number of blocks of quantifiers of the same type determines the subscript.
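
Steps 3 and 4 are purely mechanical, so it may help to see them as a tiny program.  Here is a minimal Python sketch (my illustration, not part of the notes' machinery): it takes a prenex prefix written as a string of ∀ and ∃ symbols, collapses adjacent quantifiers of the same kind into blocks, and names the resulting class.  The prefixes in the comments are the ones worked out in the examples below.

    # A minimal sketch: classify a prenex quantifier prefix over a recursive matrix.
    def classify(prefix: str) -> str:
        """Return the class of a prefix such as '∀∀∃∀' (here: 'Pi 3')."""
        if not prefix:
            return "recursive (Sigma 0 = Pi 0)"
        blocks = 1
        for prev, cur in zip(prefix, prefix[1:]):
            if cur != prev:          # a change of quantifier starts a new block
                blocks += 1
        kind = "Sigma" if prefix[0] == "∃" else "Pi"
        return f"{kind} {blocks}"

    print(classify("∀∃"))        # Pi 2     (Tot, below)
    print(classify("∃∀"))        # Sigma 2  (Fin, below)
    print(classify("∀∀∀∃∃"))     # Pi 2     (Ident, below)
    print(classify("∀∃∀"))       # Pi 3     (Simp, below)
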
Examples:
 
Tot(x) ⇔ W[x] = N
⇔ ∀z W[x](z)
⇔ ∀z ∃w U(x, d(w, 0), d(w, 1), <z>)
⇔ ∀∃R, with R recursive.
So Π2(Tot).
Fin(x) ⇔ W[x] is finite
⇔ ∃y ∀z ≥ y ¬W[x](z)
⇔ ∃y ∀z ≥ y φ[x](z)↑
⇔ ∃y ∀z ≥ y ∀w ¬U(x, d(w, 0), d(w, 1), <z>)
⇔ ∃∀R, with R recursive.
So Σ2(Fin).
These were automatic!  Let's do one that requires a little bit of shuffling.
Ident(x) ⇔ W[d(x, 0)] = W[d(x, 1)]
⇔ ∀z (W[d(x, 0)](z) ⇔ W[d(x, 1)](z))
⇔ ∀z [(W[d(x, 0)](z) ∧ W[d(x, 1)](z)) ∨ (¬W[d(x, 0)](z) ∧ ¬W[d(x, 1)](z))]
⇔ ∀z [(∃w U(d(x, 0), d(w, 0), d(w, 1), <z>) ∧ ∃w U(d(x, 1), d(w, 0), d(w, 1), <z>)) ∨ (∀w ¬U(d(x, 0), d(w, 0), d(w, 1), <z>) ∧ ∀w ¬U(d(x, 1), d(w, 0), d(w, 1), <z>))]
⇔ ∀[(∃ ∧ ∃) ∨ (∀ ∧ ∀)]  (notice, the lead ∀ makes it most efficient to put all the ∀ quantifiers first).
⇔ ∀∀∀∃∃.
So Π2(Ident).
Here is a more complicated one that makes use of work already done.
Simp(x) ⇔ W[x] is simple
⇔ N − W[x] is infinite ∧ ∀y (W[y] is infinite ⇒ W[x] ∩ W[y] ≠ ∅)
⇔ ∀v ∃z ≥ v ¬W[x](z) ∧ ∀y (¬Inf(y) ∨ ∃z (W[x](z) ∧ W[y](z)))
⇔ ∀v ∃z ≥ v ∀w ¬U(x, d(w, 0), d(w, 1), <z>) ∧ ∀y (∃u ∀z ≥ u ∀w ¬U(y, d(w, 0), d(w, 1), <z>) ∨ ∃z (∃w U(x, d(w, 0), d(w, 1), <z>) ∧ ∃w U(y, d(w, 0), d(w, 1), <z>)))
⇔ ∀∃∀ ∧ ∀(∃∀∀ ∨ ∃(∃ ∧ ∃))
⇔ ∀∃∀ ∧ ∀(∃∀∀ ∨ ∃∃∃)
⇔ ∀∃∀ ∧ ∀∃∃∃∃∀∀  (export the ∃ quantifiers of the two disjuncts first; the surviving ∀ quantifiers follow)
⇔ ∀∀∃∃∃∃∃∀∀∀
⇔ ∀∃∀.
So Π3(Simp).
Exercise 1:  do five more examples.  Include Comp.

Lower Bounds

Lower bounds come by several techniques: diagonalization, reduction, or direct completeness arguments.  We have already seen two ways to do diagonalization in the last chapter, providing us with "seed" for reduction arguments.  Let's begin with a direct completeness argument.

Proposition 1:  Fin is Σ2-complete, Inf is Π2-complete.

Proof:  Suppose Σ2(P).  So for some recursive R, we have for each x,

P(x) ⇔ ∃y ∀z R(<x, y, z>).
Define
ψ(x, w)↓ ⇔ ∀y ≤ w ∃z ¬R(<x, y, z>).
The condition on the right is Σ1, so such a partial recursive ψ exists by the projection theorem.  Apply the s-m-n theorem to obtain
φ[f(x)](w) = ψ(x, w).
Hence,
¬P(x) ⇒ φ[f(x)] is total ⇒ Tot(f(x)) ⇒ Inf(f(x));
P(x) ⇒ W[f(x)] is finite ⇒ Fin(f(x)).
-|
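
To fix intuitions, here is a toy Python rendering of the reduction just given (my illustration, not the official construction): an ordinary Python predicate R stands in for the recursive matrix, a closure stands in for the index f(x) delivered by the s-m-n theorem, and unbounded search stands in for the Kleene predicate.

    # Toy sketch of the Proposition 1 reduction, where P(x) <=> ∃y ∀z R(<x, y, z>).
    def reduce_to_fin_tot(R):
        """Return the map x |-> psi_x whose domain is all of N when not P(x)
        and finite when P(x).  R is a toy stand-in for the recursive matrix."""
        def f(x):
            def psi(w):
                # psi(w) halts iff for every y <= w some z with not R(x, y, z) is found.
                for y in range(w + 1):
                    z = 0
                    while R(x, y, z):    # unbounded search: diverges exactly when ∀z R(x, y, z)
                        z += 1
                return 0
            return psi
        return f

    # Toy matrix: R(x, y, z) <=> (y == x), so P(x) holds for every x (witness y = x).
    f = reduce_to_fin_tot(lambda x, y, z: y == x)
    psi3 = f(3)
    print(psi3(2))    # halts and prints 0, since 3 is not among y = 0, 1, 2
    # psi3(3) would diverge, so dom(psi3) = {0, 1, 2} is finite, as Fin demands.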

Notice that the reduction shows more than we intended.  It projects ¬P into Tot and P into Fin.  Following Soare, we may summarize this situation by the notation:

(P, ¬P) ≤m (Fin, Tot).
When P stands for an arbitrary Σ2 set, we may abbreviate the situation by writing
(Σ2, Π2) ≤m (Fin, Tot).
Since Fin is in the complement of  Tot, this reduction also establishes:

Corollary 2:  Tot is Π2-complete.
-|

Proposition 3:  Subset is Π2-complete.

Exercise 2:  Prove the upper bound by Tarski-Kuratowski computation; prove the lower bound by reduction of Tot or Inf.  Hint: make the "subset" index be for N and make the "superset" index depend on the given number.

Exercise 3:  Peg the complexity of Onto in the hierarchy.  No hints this time.

At the next level of the hierarchy, lower bound arguments become more involved.

Proposition 4: (Σ3, Π3) ≤m (Cof, Comp) ≤m (Rec, Comp).
Corollary: Cof, Rec are Σ3-complete.

Proof: Suppose Σ3(P).  So for some Π2 relation R, we have for each x,

P(x) ⇔ ∃y R(<x, y>).
We have already shown that there exists a total recursive f such that
R(<x, y>) ⇔ W[f(<x, y>)] is infinite.
Hence
P(x) ⇔ ∃y R(<x, y>) ⇔ ∃y W[f(<x, y>)] is infinite.
We need to construct a total recursive g such that
P(x) ⇒ W[g(x)] is cofinite and
¬P(x) ⇒ W[g(x)] ≡ K.
I sketch the construction, showing how to enumerate S = W[g(x)] as a function of x:
We start out with an array of "pointers" on the natural numbers labelled with the natural numbers.
Let pointer_x(y, s) be the number pointed to by the pointer labelled with y at stage s.
At stage s = 0, the y-th pointer is initialized to point to number y: i.e., pointer_x(y, 0) = y.
At stage  s + 1:
Check for each y ≤ s whether either of the following occurs:
 K(y) halts in exactly s steps (this can happen only once) or
∃w ≤ s W[f(<x, y>)](w) halts in exactly s steps (this happens infinitely often for some y just in case P(x)).
For each such y, add pointer_x(y, s) to S.
Now move all pointers to the right, without permuting them, so that positions already added to S are not pointed to, but without leaving any other gaps.
P(x) ⇒ ∃y W[f(<x, y>)] is infinite
⇒ ∃y lim_s pointer_x(y, s) = ω
⇒ S = W[g(x)] is cofinite (only the finitely many pointers below such a y come to rest, and only their limit positions can stay out of S)
⇒ Cof(g(x)).

¬P(x) ⇒ ∀y W[f(<x, y>)] is finite
⇒ ∀y lim_s pointer_x(y, s) is finite
Hence each terminal pointer position is missing from S.
Using the fact that each pointer bumps only finitely often, we can compute, given S, the least stage s(y) such that
pointer_x(y, s(y)) = lim_s pointer_x(y, s)  (the limit position is just the (y + 1)-st number outside S).  In other words, the function s is recursive in S.
To decide whether K(y), check whether K(y) halts within s(y) steps; if it has not halted by then, it never will, for otherwise the terminal position of the pointer labelled y would later be dumped into S.  In other words, the convergence of the pointer, which is recoverable from S, bounds the search for the stage at which K(y) could halt.
Hence S = W[g(x)] ≡ K (S is r.e., so conversely S ≤m K), so Comp(g(x)).
Apply Church's thesis!
-|
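
To watch the marker dynamics, here is a toy Python simulation (my illustration; halts_K and fires are hypothetical stand-ins for "K(y) halts in exactly s steps" and "some w ≤ s with W[f(<x, y>)](w) halting in exactly s steps").  Because the markers never permute and leave no gaps, the marker labelled y always sits on the (y + 1)-st number not yet in S, and that is all the helper below computes.

    # Toy simulation of the movable-marker construction.
    def marker(y, S):
        """Current position of the marker labelled y: the (y + 1)-st number not in S."""
        pos, skipped = 0, 0
        while True:
            if pos not in S:
                if skipped == y:
                    return pos
                skipped += 1
            pos += 1

    def enumerate_S(stages, halts_K, fires):
        """Replay the stages, dumping the positions of the triggered markers into S.
        halts_K(y, s) and fires(y, s) are toy stand-ins for the two events above."""
        S = set()
        for s in range(stages):
            hit = [marker(y, S)                 # positions read off at the start of the stage...
                   for y in range(s + 1)
                   if halts_K(y, s) or fires(y, s)]
            S.update(hit)                       # ...then dumped into S; the markers shift right
        return S

    # Toy run: the event for y = 0 fires at every stage, so marker 0 is bumped forever
    # and an initial segment of N is swept into S -- the cofinite (P(x)) side.
    S = enumerate_S(20, halts_K=lambda y, s: False, fires=lambda y, s: y == 0)
    print(sorted(S), marker(0, S))    # [0, 1, ..., 19] and marker 0 now sits at 20
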
So recursiveness is a harder problem than finiteness or infiniteness, but exactly as hard as co-finiteness.  Non-recursiveness is therefore exactly as hard as co-infiniteness.

How about Comp?  The obvious Tarski-Kuratowski computation of Comp takes us only down to Σ4, unfortunately, so our previous reduction doesn't lock in the complexity of Comp.  Yates showed that Comp is Σ4-complete.  He proves it via his "thickness" lemma, which is proved by an infinite injury argument.

Exercise 4:  This is from Soare, ex. 3.8.  Define

Ext(x) ⇔ φ[x] is extendable to a total recursive function.
Show that Ext is Σ3-complete.  Hint: use the preceding technique.  Instead of building an r.e. set, W[g(x)], build a partial recursive function φ[g(x)].  When W[f(<x, y>)](w) halts in exactly s steps, define φ[g(x)](pointer_x(y, s)) to be some value (e.g., 0).  Also, if φ[y](pointer_x(y, s)) is observed to halt in s steps, define φ[g(x)](pointer_x(y, s)) to have a value different from φ[y](pointer_x(y, s)).  The resulting function is guaranteed to be partial recursive.  If some W[f(<x, y>)] is infinite, the function will itself be total recursive.  If no W[f(<x, y>)] is infinite, the function differs somewhere from each total recursive function.  (This takes some arguing:  how do you know that the y-th pointer sits around long enough to ensure that the constructed function differs from φ[y]?)

Arithmetical Truth

Let A = the set of all Gödel numbers of true sentences of arithmetic.

Gödel's incompleteness construction shows only that A is productive.  As we have seen, productive sets are merely co-r.e.-hard (i.e., every co-r.e. set is m-reducible to them, so in particular they are not r.e.).  This is fairly weak, since it is consistent with A being co-r.e.  If A were co-r.e., there would be an effective refutation procedure for A, which would tell us of each non-truth that it is false.  A little thought reveals, however, that A cannot be co-r.e. either.

In fact, A is not even arithmetical (i.e., it diagonalizes the whole hierarchy).  We will see this as follows.
Recall from the Gödel and uncomputability class that Q is a weak, incomplete system of arithmetic.

A relation R(x) is representable in Q just in case there exists a formula F(x) in the language of arithmetic such that for all b,

  1. R(b) ⇒ Q |- F(b), and
  2. ¬R(b) ⇒ Q |- ¬F(b).
The main lemma en route to Gödel's incompleteness theorem is the representation theorem, which we will not reprove here:

Proposition 5 (Gödel):  Each recursive relation is representable in Q.

This implies that each r.e. predicate has a purely existential definition in the language of Arithmetic.

Proposition 6:  A is not arithmetical.

Proof:  Let Σn(P).  Then for some recursive relation R, we have
P(x) ⇔ ∃y1 ... Qyn R(x, y1,..., yn).  Using arithmetical representability, choose a formula F representing R.  Then:

P(x) ⇔ ∃y1 ... Qyn F(x, y1,..., yn)
⇔ the sentence "∃y1 ... Qyn F(x, y1,..., yn)" is true in arithmetic
⇔ A(<∃y1 ... Qyn F(x, y1,..., yn)>).
By Church's thesis, there is a total recursive f such that for all x,
f(x) = <∃y1 ... Qyn F(x, y1,..., yn)>.
Thus P ≤m A.
Hence, for each n, 0(n + 1) ≤m A.
So by Post's theorem, for each n, ¬Σn(A).
-|
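
The appeal to Church's thesis is harmless: f just writes down a sentence.  Here is a toy Python sketch of such an f (my illustration; the matrix formula, the numeral notation, and the string-to-number coding are all stand-in choices made only to show that the map is clearly computable, and it assumes a strictly alternating prefix).

    # Toy sketch of the map f in Proposition 6.  All notational choices are stand-ins.
    def numeral(x: int) -> str:
        """The numeral for x: S(S(...S(0)...))."""
        return "S(" * x + "0" + ")" * x

    def sentence(x: int, n: int, F: str) -> str:
        """Prefix n alternating quantifiers (leading ∃) to F, plugging in numeral(x)."""
        quants = "".join(("∃" if i % 2 == 0 else "∀") + f"y{i + 1} " for i in range(n))
        return quants + F.format(x=numeral(x))

    def code(text: str) -> int:
        """A toy Gödel numbering: read the UTF-8 bytes of the sentence as one integer."""
        return int.from_bytes(text.encode("utf-8"), "big")

    def f(x: int, n: int = 2, F: str = "R({x}, y1, y2)") -> int:
        return code(sentence(x, n, F))

    print(sentence(2, 2, "R({x}, y1, y2)"))    # ∃y1 ∀y2 R(S(S(0)), y1, y2)
    print(f(2))                                 # its (toy) Gödel number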

We can say more than this.  We know what the finite jumps 0(n) mean.  What should the limit jump 0(ω) mean?  All of the preceding jumps should be reducible to it.  The following notion will do:

0(ω)(x) ⇔ 0(d(x, 1))(d(x, 0)).
Now we may state:

Proposition 7:  A ≡ 0(ω).

Proof:  (Rogers Thm X, p. 318).
-|

A question for us later:  if A is not arithmetical, then where does A live?