KEVIN T. KELLY: Recent Publications

Books

The Logic of Reliable Inquiry, Oxford: Oxford University Press, 1996.
[HTML file containing analytical table of contents only]
This is my most comprehensive presentation of computational learning theory as a nonstandard foundation for the philosophy of science.  Click on the title for the analytical contents of the book.  The first six papers listed below came out after the book, however, and hence are not covered.

Articles

``A Close Shave with Realism: Ockham's Razor Derived from Efficient Convergence'',  completed manuscript.
[PDF file]
One of my best papers.  Based on an idea due to learning theorists R. Freivalds and C. Smith, I isolate a ``pure'' Ockham principle of which minimizing existential commitment, minimizing polynomial degree, finding the most restrictive conservation laws, and optimizing theoretical unity are instances.  Then I show that choosing the Ockham hypothesis is necessary for minimizing the number of retractions or errors in the worst case prior to converging to the right answer.  I also show that following Ockham's principle is sufficient for error minimization but not for retraction efficiency.  Retraction efficiency is equivalent to the principle that one must retain one's current hypothesis until a ``surprise'' occurs.  These results are pertinent to the ``realism debate'' because the Ockham principle must be satisfied (as the realist insists) for efficiency's sake even though the Ockham hypothesis might very well be wrong (the anti-realist's point).  The key to the study is a topologically invariant notion of ``surprise complexity'', which characterizes the least worst-case transfinite bound achievable in answering a given empirical question.
``How to Do Things with an Infinite Regress'', completed manuscript.
[PDF file]
A fundamental problem for naturalistic epistemology is that reliability does not seem sufficient for knowledge if one has no reason to believe one is reliable.  This is often taken as an argument for coherentism.  I respond in a different way: I invoke a methodological regress by asking another method to check whether your method will actually succeed.  If the question arises again, invoke another method to check the success of the second, and so forth.  Then I solve for the intrinsic worth of infinite methodological regresses.  The idea is to find the best single-method performance to which an infinite regress of methods could be reduced, in the sense that the single method receives as inputs only the successive outputs or conjectures of the methods in the regress.  I solve several different kinds of regresses in this way, with interesting observations about the viability of K. Popper's response to Duhem's problem.


``Learning Theory and Epistemology'', forthcoming in Handbook of Epistemology, I. Niiniluoto, M. Sintonen, and J. Smolenski, eds.  Dordrecht: Kluwer, 20 pages.
[PDF file]

A review of standard learning theory results for epistemologists.
``The Logic of Success'', British Journal for the Philosophy of Science, special millennium issue, 51, 2001, pp. 639-666.
[Ugly MS Word file (BJPS requires it)]
This is the paper to read if you only read one.  It portrays computational learning theory as an alternative paradigm for the philosophy of science.  Topics covered include underdetermination as complexity, the solution of infinite epistemic regresses of the sort that arise in naturalistic philosophies of science, and a priori, transcendental deductions of the central features of Kuhnian historiography from the logic of convergence.  This is the most recent, general overview of my position, except that I could only hint at the results I obtained later in ``A Close Shave with Realism''.
``Naturalism Logicized'', in After Popper, Kuhn and Feyerabend: Current Issues in Scientific Method, R. Nola and H. Sankey, eds.  Dordrecht: Kluwer, 2000, pp. 177-210.
[PDF file]
Proves results listed in ``How to Do Things with an Infinite Regress'' and motivates the problem of solving infinite reliability regresses by referring to Larry Laudan's ``normative naturalism'' program for the philosophy of science, which urges us to check the instrumentality of new scientific methods by using old ones.
``Iterated Belief Revision, Reliability, and Inductive Amnesia'', Erkenntnis, 50, 1998, pp. 11-58.
[pdf file without figures (Miktex problem)]
This is one of my best papers.  I took the most recent proposals for iterated belief revision that have come out of the philosophical and artificial intelligence communities (e.g., W. Spohn, J. Pearl, C. Boutillier) and asked what none of their proponents has asked: do they help or hinder the search for truth?  Using generalized versions of N. Goodman's ``grue'' predicate, I compare the learning powers of the proposed methods.  It turns out that some of the methods are subject to ``inductive amnesia'', meaning that they can either predict the future or remember the past but not both!  The resulting analysis implies surprisingly strong short-run recommendations concerning the proposed methods, providing a useful side-constraint on belief-revision proposals.  [The figures are missing online because Miktex doesn't support the figure package I was using in Oztex.]
``The Learning Power of Iterated Belief Revision'', in Proceedings of the Seventh TARK Conference, I. Gilboa, ed., 1998, pp. 111-125.
[PDF file]
A crisp precis of the preceding results, with a cute example from aerodynamics.
(with O. Schulte and V. Hendricks) ``Reliable Belief Revision'', in Logic and Scientific Methods, M. L. Dalla Chiara, et al., eds.  Dordrecht: Kluwer, 1997.
[PDF file]
My first investigation of belief revision theory.  Some nice observations and distinctions, but no negative results.  Still, it laid the necessary groundwork for hooking learning theory up to belief revision theory, without which the preceding papers wouldn't have been possible.
(with O. Schulte and C. Juhl) ``Learning Theory and the Philosophy of Science'', Philosophy of Science, 64, 1997, pp. 245-267.
[PDF file]
Position piece superseded by ``The Logic of Success''.
(with O. Schulte) ``Church's Thesis and Hume's Problem'', in Logic and Scientific Methods, M. L. Dalla Chiara, et al., eds.  Dordrecht: Kluwer, 1997, pp. 383-398.
[PDF file]
Argues that uncomputability is just a species of the problem of induction, so that uncomputability should be taken seriously from the ground up in a unified theory of computable inquiry.
(with O. Schulte) ``The Computable Testability of Theories with Uncomputable Predictions'', Erkenntnis, 43, 1995, pp. 29-66.