KEVIN T. KELLY: Research and Selected Bibliography

OVERVIEW: CONVERGENT RELIABILITY

What primarily interests me is what scientific method has to do with, or could have to do with, getting the right answers to our questions about the world.  Standard philosophy of science sidesteps this question by asking, instead, about the meanings of ``justification'' and ``rationality'', which is a different matter entirely.  I put the former question front and center at the expense of the latter.  In this respect, my approach to epistemology closely parallels work in theoretical computer science and the foundations of mathematics, in which the central question is the existence of a reliable procedure for finding the right answer to a question.  Thomas Kuhn emphasized that paradigm shifts involve an exchange of figure and ground.  Imposing the routine standards of computability theory on the shifting sands of epistemology amounts to a major paradigm shift in just that respect.  Or so I have argued at length in the following sequence of works.

``The Logic of Success'', British Journal for the Philosophy of Science, special millennium issue, 51, 2001, pp. 639-666.
[Ugly MS Word file (BJPS requires it)]

This is the paper to read if you only read one.  It portrays computational learning theory as an alternative paradigm for the philosophy of science.  Topics covered include underdetermination as complexity, the solution of infinite epistemic regresses of the sort that arise in naturalistic philosophies of science, and a priori, transcendental deductions of the central features of Kuhnian historiography from the logic of convergence.  This is the most recent, general overview of my position, except that I could only hint at the results I obtained later in ``A Close Shave with Realism''.
The Logic of Reliable Inquiry, Oxford: Oxford University Press, 1996.
[HTML file containing analytical table of contents only]
This is my most comprehensive presentation of computational learning theory as a nonstandard foundation for the philosophy of science.  Click on the title for the analytical contents of the book.  The first six papers listed below came out after the book, however, and hence are not covered.
``Learning Theory and Epistemology'', forthcoming in Handbook of Epistemology, I. Niiniluoto, M. Sintonen, and J. Smolenski, eds., Dordrecht: Kluwer.  20 pages.
[PDF file]
A review of standard learning theory results for epistemologists.
(with O. Schulte and C. Juhl) ``Learning Theory and the Philosophy of Science'', Philosophy of Science, 64, 1997, pp. 245-267.
[PDF file]
Position piece superseded by ``The Logic of Success''.
``Reichenbach, Induction, and Discovery'',  Erkenntnis, 35, 1991, pp. 123-149.
Develops my own position out of Reichenbach's by relaxing his assumptions, especially the assumption that all induction concerns probability!

INFINITE REGRESSES: CUT DOWN TO SIZE

Since Aristotle, philosophers have shunned infinite regresses in favor of  foundations and (more recently) circles.  Thus, epistemologists count themselves either as foundationalists or as coherentists (i.e., circle-ists).  One of the fruits of running opposite to the ``justificationist'' crowd is a systematic perspective on the power and significance of reliabilistic regresses.  This study also responds to a standard objection to reliability analyses of knowledge, namely, that one cannot reliably determine whether one is actually reliable (i.e., whether one's background assumptions are, in fact, satisfied).

``How to Do Things with an Infinite Regress'', completed manuscript.
[PDF file]

A fundamental problem for naturalistic epistemology is that reliability does not seem sufficient for knowledge if one has no reason to believe one is reliable.  This is often taken as an argument for coherentism.  I respond in a different way: invoke a methodological regress by asking another method to check whether the first method will actually succeed.  If the question arises again, invoke a third method to check the success of the second, and so forth.  Then I solve for the intrinsic worth of infinite methodological regresses.  The idea is to find the best single-method performance to which an infinite regress of methods could be reduced, in the sense that the single method receives as inputs only the successive outputs or conjectures of the methods in the regress.  I solve several different kinds of regresses in this way, with interesting observations about the viability of K. Popper's response to Duhem's problem.
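The reduction can be pictured with a minimal Python sketch.  This is my own toy illustration, not the construction in the paper: a ``method'' is modeled as a function from finite data sequences to conjectures, the collapsed method sees nothing but the successive conjectures of the methods in the regress, and the aggregation rule and example methods are hypothetical.

```python
# Toy sketch (not the paper's construction): collapsing a regress of methods
# into one method whose only inputs are the regress's successive conjectures.
from typing import Callable, List, Sequence

# A "method" maps a finite data sequence to a conjecture (here, a bool).
Method = Callable[[Sequence[int]], bool]

def collapse(regress: List[Method]) -> Method:
    """Hypothetical aggregation rule: endorse method 0's conjecture only
    while every higher method currently certifies the one below it."""
    def single(data: Sequence[int]) -> bool:
        conjectures = [m(data) for m in regress[:len(data) + 1]]
        answer, checks = conjectures[0], conjectures[1:]
        return answer if all(checks) else False
    return single

# Hypothetical regress: method 0 conjectures "all observations are 0";
# each higher method conjectures that the method below it will succeed.
base: Method = lambda data: all(x == 0 for x in data)
checker: Method = lambda data: True          # an optimistic success-checker
learner = collapse([base] + [checker] * 10)  # finite stand-in for the regress
print(learner([0, 0, 0]))  # True: the conjecture survives the checkers
print(learner([0, 1, 0]))  # False: the data refute "all observations are 0"
```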
``Naturalism Logicized'', in After Popper, Kuhn and Feyerabend: Current Issues in Scientific Method, R. Nola and H. Sankey, eds., Dordrecht: Kluwer, 2000, pp. 177-210.
[PDF file]
 
Contains the proofs for ``How to Do Things with an Infinite Regress'' and motivates the problem of solving infinite reliability regresses by referring to Larry Laudan's ``normative naturalism'' program for the philosophy of science, which urges us to check the instrumentality of new scientific methods by using old ones.

REALISM: HAVING YOUR CAKE AND EATING IT

Realists recommend choosing simple, unified theories.  Anti-realists respond that such theories may, nonetheless, be wrong.  Not much is added to the debate on subsequent iterations.  I argue that, after all, the two positions are consistent: the simpler theory may be wrong, but one must choose it on pain of converging to the correct answer less efficiently in the worst case.  The argument provides an anti-realist foundation for realism.

``A Close Shave with Realism: Ockham's Razor Derived from Efficient Convergence'',  completed manuscript.
[PDF file]

One of my best papers.  Starting from an idea due to learning theorists R. Freivalds and C. Smith, I isolate a ``pure'' Ockham principle of which minimizing existential commitment, minimizing polynomial degree, finding the most restrictive conservation laws, and optimizing theoretical unity are instances.  Then I show that choosing the Ockham hypothesis is necessary for minimizing the number of retractions or errors in the worst case prior to converging to the right answer.  I also show that following Ockham's principle is sufficient for error minimization but not for retraction efficiency.  Retraction efficiency is equivalent to the principle that one must retain one's current hypothesis until a ``surprise'' occurs.  These results are pertinent to the ``realism debate'' because the Ockham principle must be satisfied (as the realist insists) for efficiency's sake even though the Ockham hypothesis might very well be wrong (the anti-realist's point).  The key to the study is a topologically invariant notion of ``surprise complexity'' which characterizes the least worst-case transfinite bound achievable in answering a given empirical question.
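For the polynomial-degree instance, the retraction bookkeeping can be illustrated with a toy Python sketch of my own (noiseless integer samples at consecutive integers, with conjecture changes standing in for retractions): the Ockham learner always conjectures the lowest degree consistent with the data so far, so it retracts only when a surprise forces the degree up.

```python
# Toy sketch of an Ockham learner for the polynomial-degree example:
# conjecture the lowest degree consistent with the data seen so far and
# count retractions (here, changes of conjecture).  With noiseless samples
# of a degree-d polynomial at consecutive integers, the learner converges
# to d after at most d retractions.
from typing import List, Optional

def finite_differences(ys: List[int]) -> List[int]:
    return [b - a for a, b in zip(ys, ys[1:])]

def least_consistent_degree(ys: List[int]) -> int:
    """Smallest d such that the (d+1)-th finite differences all vanish."""
    d, diffs = 0, finite_differences(ys)
    while any(diffs):
        d, diffs = d + 1, finite_differences(diffs)
    return d

def ockham_run(ys: List[int]) -> int:
    """Feed the data point by point; return the number of retractions."""
    retractions = 0
    conjecture: Optional[int] = None
    for n in range(1, len(ys) + 1):
        new = least_consistent_degree(ys[:n])
        if conjecture is not None and new != conjecture:
            retractions += 1
        conjecture = new
    return retractions

# Samples of x**2 + 1 at x = 0..5: the learner starts at degree 0, is
# surprised twice, and settles on degree 2.
print(ockham_run([1, 2, 5, 10, 17, 26]))  # 2
```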
(with C. Juhl) ``Realism, Convergence, and Additivity'', in Proceedings of the 1994 Biennial Meeting of the Philosophy of Science Association, D. Hull, M. Forbes, and R. Burian, eds., East Lansing: Philosophy of Science Association, 1994, pp. 181-190.
In this paper, we argue that Bayesian ``measure one'' convergence theorems permit the dogmatic exclusion of possibilities that would prevent convergence if they were not ignored, a bias against the anti-realist's position.  The paper is superseded by Chapter 13 of The Logic of Reliable Inquiry (above).

RELATIVISM: WHO'S AFRAID OF BIG BAD RORTY?

It is widely thought that convergence is incompatible with ``meaning variance'': if truth changes as a function of our methodological decisions and beliefs, there is no fixed target for inquiry to hit.  But skeet shooters have long known that it is possible to hit a moving target.  The logical analogue of this idea is a logic of discovery in which the truth changes in response to the learner's acts, so that relative knowledge is a fixed point: inquiry eventually locks on to the state of always believing what happens to be true at the time.
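Here is a toy loop, with a dynamics invented purely for illustration, showing the fixed-point idea: what is true at each stage depends on what the learner currently believes, yet iteration settles into a state in which the learner always believes what is then true.

```python
# Toy sketch of hitting a moving target: the truth is a function of the
# learner's current belief, and inquiry succeeds by reaching a fixed point
# at which the learner always believes what happens to be true at the time.
def truth(belief: int) -> int:
    """What is true, given what the learner currently believes (a
    hypothetical dynamics chosen only so that iteration settles down)."""
    return min(belief + 1, 5)   # belief nudges the truth, up to a ceiling

belief = 0
for stage in range(8):
    fact = truth(belief)        # the world responds to the current belief
    print(f"stage {stage}: believes {belief}, truth is {fact}")
    belief = fact               # the learner then believes the current truth
# From stage 5 on, belief == truth(belief): relative knowledge as a fixed point.
```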

(with C. Juhl and C. Glymour) ``Reliability, Realism, and Relativism'', in Reading Putnam, P. Clark, ed., London: Blackwell, 1994, pp. 98-161.
[ugly MS Word file]

This paper emerged from my participation in the conference at St. Andrews celebrating Hilary Putnam's Gifford lectures.  Putnam invented computational learning theory in a critique of Carnap's confirmation theory.  In this paper, I provide some criticism of the morals Putnam drew from his result and apply similar techniques to show that internal realist truth is incomplete.  I had viewed Putnam's internal realism as an outgrowth of his learning-theoretic ideas (which also show up in his technical work in mathematical logic).  At the Gifford conference he surprised me by saying that he viewed the three ideas as being entirely separate.  A major regret is that I submitted the paper late, so Putnam did not have a chance to respond to it in the volume.  [The online ms is in a junky MS Word format.]
(with C. Glymour) ``Inductive Inference from Theory Laden Data'', Journal of Philosophical Logic, 21, 1992, pp. 391-444.
This was an attempt to respond to Kuhnian worries from a learning theoretic perspective.  A more mature perspective on the material is presented in chapter 16 of The Logic of Reliable Inquiry.  I now think that the interpretation of Kuhn in ``The Logic of Success'' is both mathematically more tractable and a better fit to Kuhn's relatively tame views.  Working on this paper is what convinced me to turn to topology as the fundamental framework for understanding the problem of induction, but the perspective does not yet appear in the paper.

BELIEF REVISION THEORY: THE GOOD, THE BAD, AND THE UGLY

Since the days of Carneades the skeptic, everyone has thought that inductive uncertainty should be accompanied by partial degrees of belief.  Moreover, Bayesians tend to reject worst-case reasoning in favor of expected-case results or measure-one results (e.g., the Bayesian ``washing out of the prior'' theorems).  But some Bayesians have conceded that every statistical model involves ``full beliefs'' that may be false (e.g., maybe the coin flips aren't independent after all).  Bayesian updating can't handle conditioning on information that refutes the sample space, since it can only restrict the given space.  Belief revision theory is supposed to extend Bayesian rationality to the ``troublesome'' case of ``belief-contravening'' updates.  But these would-be extended Bayesians now find themselves without a sample space in which to conduct expected-case reasoning about the convergence prospects of a method that directs changes of full belief in light of sequentially presented inputs.  Since the advocates of rationality came this close to computational learning theory on their own, I decided to oblige them further by performing a worst-case reliability analysis of their proposed methods, resulting in sharp, short-run recommendations for how to fix some of the methods to make them as reliable as possible.
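To illustrate the point about conditioning with a toy prior of my own (not an example from any of the papers below): Bayesian updating can only cut the given sample space down, so evidence assigned prior probability zero leaves the posterior undefined.

```python
# Toy illustration: conditioning works only within the given sample space.
prior = {"fair coin": 0.5, "two-headed coin": 0.5}
likelihood = {("fair coin", "tails"): 0.5, ("two-headed coin", "tails"): 0.0,
              ("fair coin", "on edge"): 0.0, ("two-headed coin", "on edge"): 0.0}

def condition(prior, likelihood, evidence):
    joint = {h: prior[h] * likelihood[(h, evidence)] for h in prior}
    total = sum(joint.values())
    if total == 0:
        raise ValueError("evidence has prior probability zero: update undefined")
    return {h: p / total for h, p in joint.items()}

for evidence in ["tails", "on edge"]:   # "on edge" refutes the full beliefs
    try:
        print(evidence, "->", condition(prior, likelihood, evidence))
    except ValueError as err:
        print(evidence, "->", err)
```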

``Iterated Belief Revision, Reliability, and Inductive Amnesia'', Erkenntnis, 50, 1998, pp. 11-58.
[pdf file without figures (Miktex problem)]

This is one of my best papers.  I took the most recent proposals for iterated belief revision that have come out of the philosophical and artificial intelligence communities (e.g., W. Spohn, J. Pearl, C. Boutilier) and asked what none of their proponents has asked: do they help or hinder the search for truth?  Using generalized versions of N. Goodman's ``grue'' predicate, I compare the learning powers of the proposed methods.  It turns out that some of the methods are subject to ``inductive amnesia'', meaning that they can either predict the future or remember the past but not both!  The resulting analysis implies surprisingly strong short-run recommendations concerning the proposed methods, providing a useful side constraint on belief-revision proposals.  [The figures are missing online because Miktex doesn't support the figure package I was using in Oztex.]
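For a feel for the machinery, here is a small executable sketch in the spirit of the paper, though with my own toy hypotheses and a generic Spohn-style revision operator rather than the specific methods compared there: generalized ``grue'' hypotheses differ only in when the switch from green to blue occurs, and iterated revision shifts a ranking function as the observations arrive.

```python
# Minimal sketch (my own toy setup, not the paper's comparison): iterated
# Spohn-style revision of a ranking function over generalized "grue"
# hypotheses h_k = "green up to stage k, blue thereafter" plus h_inf =
# "green forever".  Lower rank = more plausible; the learner believes the
# rank-0 hypotheses.
INF = float("inf")
hypotheses = {f"h_{k}": k for k in range(5)}   # switch to blue at stage k
hypotheses["h_inf"] = None                     # never switches

def predicts(name, stage):
    k = hypotheses[name]
    return "green" if k is None or stage < k else "blue"

def spohn_revise(ranks, evidence, strength=2):
    """Shift the ranking so evidence-worlds lead by `strength`."""
    in_min = min((r for w, r in ranks.items() if w in evidence), default=INF)
    out_min = min((r for w, r in ranks.items() if w not in evidence), default=INF)
    return {w: (r - in_min if w in evidence else r - out_min + strength)
            for w, r in ranks.items()}

ranks = {name: 0 for name in hypotheses}       # start with a flat ranking
for stage, color in enumerate(["green", "green", "blue"]):
    consistent = {w for w in hypotheses if predicts(w, stage) == color}
    ranks = spohn_revise(ranks, consistent)
    beliefs = sorted(w for w, r in ranks.items() if r == 0)
    print(f"stage {stage}: saw {color}, believes {beliefs}")
```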
``The Learning Power of Iterated Belief Revision'', in Proceedings of the Seventh TARK Conference, I. Gilboa, ed., 1998, pp. 111-125.
[PDF file]
A crisp precis of the preceding results, with a cute example from aerodynamics.
(with O. Schulte and V. Hendricks) ``Reliable Belief Revision'', in Logic and Scientific Methods, M. L. Dalla Chiara, et al., eds., Dordrecht: Kluwer, 1997.
[PDF file]
My first investigation of belief revision theory.  Some nice observations and distinctions, but no negative results.  Still, it laid the necessary groundwork for hooking learning theory up to belief revision theory, without which the preceding papers wouldn't have been possible.

TURING MEETS HUME: WHY RELATIONS OF IDEAS ARE MATTERS OF FACT

Kuhn teaches that a single, deep success suffices to keep a competing paradigm on the table.  Not surprisingly, computational learning theory shows its superiority over ideal theories of rationality when we trade in our ideal agents for more realistic, computable agents.  The foundation of the deep success is a strong structural analogy between the halting problem and the problem of inductive generalization, allowing for a unified treatment of both, from the ground up.  One consequence of the approach is that one can often show that computable agents are forced to choose between ideal rationality and finding the right answer.  I say ``so much the worse for ideal rationality''.  Another is that there are learning problems that cannot be solved by computational means unless the Humean barrier between theorem proving and the external, empirical data is torn down.

(with O. Schulte) ``Church's Thesis and Hume's Problem'', in Logic and Scientific Methods, M. L. Dalla Chiara, et al., eds., Dordrecht: Kluwer, 1997, pp. 383-398.
[PDF file]

Argues that uncomputability is just a species of the problem of induction, so that it should be taken seriously from the ground up in a unified theory of computable inquiry.
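A toy illustration of the analogy (mine, not a result from the paper): ``this computation never halts'' and ``every future observation is 0'' are both decidable in the limit by the same kind of method, one that conjectures the universal claim until a counterexample shows up at some finite stage.

```python
# Toy illustration of the structural analogy between the halting problem
# and inductive generalization: both verdicts below stabilize to the right
# answer in the limit, for the same reason.
import itertools

def limiting_verdicts_halting(step, stages=20):
    """At stage n, conjecture whether the computation never halts, having
    simulated n steps; `step(n)` returns True once the computation halts."""
    halted = False
    for n in range(stages):
        halted = halted or step(n)
        yield not halted            # current conjecture: "never halts"

def limiting_verdicts_induction(data):
    """After each observation, conjecture whether all observations are 0."""
    seen_nonzero = False
    for x in data:
        seen_nonzero = seen_nonzero or (x != 0)
        yield not seen_nonzero      # current conjecture: "all are 0"

print(list(itertools.islice(limiting_verdicts_halting(lambda n: n == 5), 8)))
print(list(limiting_verdicts_induction([0, 0, 0, 7, 0])))
```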
(with O. Schulte) ``The Computable Testability of Theories with Uncomputable Predictions'', Erkenntnis, 43, 1995, pp. 29-66.
This paper, which was reviewed in The Journal of Symbolic Logic, 61, no. 3, p. 1049, solves for how deductively complex a theory can be if a computer is to determine its truth in a given sense.  Conversely, we solve for how hard it can be to determine the truth of a theory with a given deductive complexity.  The most surprising result is that it can be possible for a computer to refute a hypothesis with certainty (a la Popper) even though the predictions of the theory are infinitely uncomputable (i.e., not even hyper-arithmetically definable).  A corollary is that even though a Turing machine can refute the hypothesis with certainty, a computable Bayesian cannot even gradually converge to the truth value of the hypothesis.  The results are analogous to, but not the same as, the basis theorems of recursion theory.

COGSCI

(with C. Glymour) ``Why You'll Never Know whether Roger Penrose is a Computer'', Behavioral and Brain Sciences, 13, 4, Dec. 1990.
I observe that the computability of human behavior is an empirical rather than a metaphysical or mathematical question.  The main result is that we can verify (in the limit) that we are computers if and only if we are not computers, which I call an ``empirical paradox''.  The argument is reviewed in The Logic of Reliable Inquiry.   Penrose responded in his sequel but didn't seem to get the point, in my opinion.
``Effective Epistemology, Psychology, and Artificial Intelligence'', in Acting and Reflecting, Wilfried Sieg, ed., New York: D. Reidel, 1990, pp. 115-126.
My first duty as an assistant professor (certainly not my idea) was a public debate on the merits of artificial intelligence with my late colleague and Nobel laureate Prof. Herb Simon.  By the end of the ``debate'' I learned never to underestimate Herb Simon!  This is the write-up of the talk, published with Simon's pointed response.  My thesis, which I still maintain, was that the best AI can be viewed as using procedures to explicate vaguely specified problems rather than as formulating efficient solutions to the problems those procedures solve.
(with C. Glymour) ``Thoroughly Modern Meno'', in Inference, Explanation, and Other Frustrations, J. Earman, ed., University of California Press, 1992, pp. 3-23.
Plato as the first computational learning theorist.