9. Uncomputability and the Problem of Induction

Kevin T. Kelly
Department of Philosophy
Carnegie Mellon University

It seems, intuitively, that the halting problem is about machines not being able to see the futures of other machines. Think of Hume's problem of induction: no matter how many times the sun has risen, it might not rise tomorrow. Just substitute "the program hasn't halted" for "the sun has risen". My Dell computer often raises this difficulty for me. The Windows 98 operating system sometimes acts like it is hung. But once I waited five minutes and it actually came back. How long do you wait? Whatever "waiting time" you preordain as enough, Windows 98 might require one second more.

Everybody thinks of it that way. And yet, the proof that the complement of the halting problem is non-verifiable is just a motorized version of Cantor's static, diagonal argument. So is all the philosophy of science talk really irrelevant, once you have the right attitude beaten into you? Or does the usual presentation miss something important?

Here's another way to do it. Classical skeptical arguments involve a "demon" who feeds data to the scientist in the most misleading possible way. Given a scientist who claims to have a method for verifying that the sun will always rise, the demon simply resolves to feed the scientist data in which the sun always rises until the scientist concludes that it always will. If the scientist never does so, the demon continues to make the sun rise, and the scientist fails to verify what is true. If the scientist does finally succumb to the evidence, the demon presents a day on which the sun does not rise, so the scientist verifies what is false. Either way, the scientist loses.
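To see the demon's strategy as an algorithm, here is a minimal Python sketch of the game. It is only an illustration under simplifying assumptions: the scientist below is a made-up verifier that announces success after ten straight sunrises, and the day cap is there only so the demo terminates.

def scientist(history):
    """A stand-in verifier: after 10 straight sunrises it announces that
    the sun will always rise.  Any such bound plays the same role."""
    return len(history) >= 10 and all(history)

def demon_plays(scientist, max_days=1000):
    """Feed sunrises until the scientist verifies 'the sun always rises';
    then present a day without one.  Either way the scientist loses."""
    history = []
    for day in range(max_days):
        if scientist(history):
            history.append(False)   # verified -- so make the hypothesis false
            return "verified a falsehood on day", day
        history.append(True)        # keep the sun rising
    return "never verified a truth within", max_days

print(demon_plays(scientist))   # -> ('verified a falsehood on day', 10)

Whatever policy the scientist adopts, the demon's two-branch strategy covers it: wait forever, or take the bait and get burned.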

Can we do the same thing to an arbitrary program index i that claims to verify the complement of the halting problem? Let's try the analogous thing. We need a computable demon index j dedicated to fooling our given method i. Our index j simulates the computation of i on input j (the demon's own index) and refuses to halt on itself until it sees in its simulation that i has become sure, from inspecting j, that j won't halt on itself. Then j halts on itself. Stop now and compare this to the preceding skeptical argument.
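Here is a toy rendering of that story in Python, offered only as an intuition pump: Python's ability to let a function mention itself by name stands in for the self-reference that the Kleene recursion theorem supplies below, and the sucker is a made-up verifier of non-halting. (In the real construction the demon simulates the sucker via the universal machine rather than calling it directly.)

def sucker(program):
    """A would-be verifier of non-halting: returns True when it is 'sure'
    that program(program) will never halt.  This one is sure of everything."""
    return True

def demon(program):
    """Consults the sucker's opinion about the demon itself and does the
    opposite.  Self-reference here plays the role of the recursion theorem."""
    if sucker(demon):      # sucker is sure demon(demon) never halts...
        return "halted"    # ...so halt, falsifying that verdict
    while True:            # otherwise never halt, so the sucker misses a truth
        pass

print(demon(demon))   # -> 'halted': this particular sucker has been fooled

Of course, a real program cannot simply mention "itself"; that is exactly what the recursion theorem provides in the formal construction that follows.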

The story I just told involves self-reference. To make it rigorous requires the Kleene recursion theorem. We carry it off using the universal, s-m-n, and fixed point results as follows. Let a hopeful "sucker" index i that wishes to verify ¬K be given. To construct an index for a demon devoted to fooling i, set up a universal construction that halts only when i halts on a given index j. By Kleene's recursion theorem, j will miraculously become the demon's own index. The following will do:

ψ(j, x) ≃ μz U(i, d(z, 0), d(z, 1), ⟨j⟩).
This is partial recursive, so there is an n such that
φ[n](j, x) ≃ ψ(j, x).
Now apply the s-m-n theorem to obtain a total recursive s such that for all n, j, x:
φ[s(n, j)](x) ≃ φ[n](j, x).
Fix n as a constant in the construction:
g(j) = s(n, j).
Now by the Kleene recursion theorem, there exists an index d such that
φ[g(d)] = φ[d].
This is our demon index.  Then we have
(*)  φ[d](d) ≃ φ[g(d)](d) ≃ φ[s(n, d)](d) ≃ φ[n](d, d) ≃ ψ(d, d) ≃ μz U(i, d(z, 0), d(z, 1), ⟨d⟩).
Thus, d is just like the skeptical demon. It halts on itself (thereby falsifying the hypothesis ¬K(d)) just in case i halts on d, indicating that i is sure that d won't halt on itself:
φ[i](d)↓ ⇔ φ[d](d)↓ ⇔ K(d).
Thus, i does not verify ¬K.

The standard, Cantorian proof works only for problems of a special form. This "skeptical" procedure is both more intuitive and much more flexible, so it is the first thing to try for negative results concerning problems based on partial recursive indices.

Exercise 1:  Use the demonic technique to establish three of the negative results from proposition 7.3.  Give an intuitive interpretation of each construction.

Skeptical Argument for Rice's Theorem

The same idea can be used to give a direct, epistemic argument for Rice's theorem. Notice that in the following argument, an arbitrary would-be decision procedure is shown to fail on a demonic index that "internally" outsmarts it.

Demonic proof:  Suppose that Γ is nontrivial.
Case I:  ∅ ∈ Γ.
Then there is some ψ ∈ Part − Γ.
Given "sucker" index n, we construct a demon index d that refuses to halt on any input until φ[n] takes the bait and concludes that φ[d] = ∅ ∈ Γ.
Thereafter, the demon starts producing outputs agreeing with ψ.
Using the universal construction, define

δ(x, y) ≃ o(μz U(n, d(z, 0), 1, ⟨x⟩)) + ψ(y).
Choose an index for δ and apply s-m-n to obtain a total recursive g such that:
φ[g(x)](y) ≃ δ(x, y).
Applying the Kleene recursion theorem, we obtain a d such that:
φ[d](y) ≃ φ[g(d)](y) ≃ δ(d, y) ≃ o(μz U(n, d(z, 0), 1, ⟨d⟩)) + ψ(y).
Hence,
φ[n](d) ≃ 1  ⇒  φ[d] = ψ ∉ Γ.
¬(φ[n](d) ≃ 1)  ⇒  φ[d] = ∅ ∈ Γ.
So φ[n] screws up on d no matter what.

Case II:  Substitute 0 for 1 in case I.
-|
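A toy Python version of Case I may help fix the idea; it is only a sketch under the same liberal assumptions as before. The sucker is a made-up decision procedure for the property "computes the empty function," psi is some fixed function outside that property, and self-reference by name again stands in for the recursion theorem.

def sucker(program):
    """A would-be decider for 'program computes the empty function':
    1 means 'yes, empty', 0 means 'no'.  This one answers 1 on everything."""
    return 1

def psi(y):
    """Some fixed function outside the property."""
    return y + 1

def demon(y):
    """Compute nothing until the sucker, applied to the demon itself, says
    'empty'; from then on behave like psi, contradicting that verdict."""
    if sucker(demon) == 1:
        return psi(y)      # the sucker said 'empty', so be non-empty
    while True:            # the sucker said 'non-empty', so never halt
        pass

print(demon(3))   # -> 4: the demon does not compute the empty function after all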

Skeptical Argument for Rice-Shapiro

Here the skeptical structure is even more interesting.

Demonic proof of proposition 7.7:
Suppose that Γ is not experimentally verifiable.
By lemma 6 of the preceding chapter,
¬(∀ψ ∈ Part, ψ ∈ Γ ⇔ ∃ finite θ ⊆ ψ, θ ∈ Γ).
Case I: ∃ψ ∈ Part, ψ ∈ Γ ∧ ∀ finite θ ⊆ ψ, θ ∉ Γ.
We need a demon that produces more and more of ψ so long as the "sucker" φ[n] hasn't taken the bait and concluded that the demon φ[d] is ψ ∈ Γ.
Then as soon as our "sucker" takes the bait, the demon stops producing any more outputs of ψ, so the demon is some finite θ ∉ Γ.
I follow the usual pattern.  Define:

δ(x, y) ≃ o(μz [d(z, 1) > y ∧ ¬U(n, d(z, 0), d(z, 1), ⟨x⟩)]) + ψ(y).
Choose an index for δ and apply s-m-n to obtain a total recursive g such that:
φ[g(x)](y) ≃ δ(x, y).
Applying the Kleene recursion theorem, we obtain a d such that:
φ[d](y) ≃ φ[g(d)](y) ≃ δ(d, y) ≃ o(μz [d(z, 1) > y ∧ ¬U(n, d(z, 0), d(z, 1), ⟨d⟩)]) + ψ(y).
Hence,
φ[n](d) ≃ 1  ⇒  φ[d] = θ for some finite θ ⊆ ψ, so φ[d] ∉ Γ.
¬(φ[n](d) ≃ 1)  ⇒  φ[d] = ψ ∈ Γ.
So φ[n] screws up on d no matter what.
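A toy Python version of this case, with totality playing the role of Γ: the total function psi is in Γ, while none of its finite restrictions are. The made-up sucker_by_stage(p, s) reports whether the would-be verifier has taken the bait on program p within s stages; stage-bounded simulation of the sucker replaces the appeal to the U-predicate, and self-reference again replaces the recursion theorem.

def sucker_by_stage(program, stage):
    """Stage-bounded view of a would-be verifier of totality: by stage 5 it
    has announced that 'program' computes a total function."""
    return stage >= 5

def psi(y):
    """A total function, hence in Gamma."""
    return 2 * y

def demon(y):
    """Extend the graph of psi only while the sucker has not yet taken the
    bait; afterwards produce no further values, leaving a finite fragment."""
    if not sucker_by_stage(demon, y + 1):
        return psi(y)
    while True:            # diverge forever past the bait-taking stage
        pass

# demon agrees with psi on 0..3 and diverges from 4 on, so it computes only a
# finite (hence non-total) fragment, although the sucker declared it total.
print([demon(y) for y in range(4)])   # -> [0, 2, 4, 6]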

Exercise 2:  Give the demonic argument for case II.