Consider a doctor with a knowledge base KB consisting of first-order information (such as ``All patients with hepatitis have jaundice''), statistical information (such as ``80% of patients with jaundice have hepatitis''), and default information (such as ``patients with pneumonia typically have fever''). The doctor may want to make decisions regarding a particular patient, using the KB in some principled way. To do this, it is often useful for the doctor to assign a numerical ``degree of belief'' to measure the strength of her belief in a given statement A. We focus on one principled method for doing so. The method, which we call the random worlds method, is a natural one: For any given domain size N, we can look at the proportion of models satisfying A among models of size N satisfying KB. If we don't know the domain size N, but know that it is large, we can approximate the degree of belief in A given KB by taking the limit of this fraction as N goes to infinity.
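The definition above can be made concrete with a toy computation. The sketch below (a hypothetical example, not taken from the talk) uses two unary predicates, Hep and Jaun, over a domain of size N; it enumerates all worlds, keeps those satisfying a KB (``all hepatitis patients have jaundice'' plus ``patient 0 has jaundice''), and reports the proportion that also satisfy the query ``patient 0 has hepatitis'':

```python
from itertools import product

# Each domain element gets one of four "types": (has Hep?, has Jaun?).
TYPES = [(h, j) for h in (False, True) for j in (False, True)]

def degree_of_belief(N, kb, query):
    """Proportion of size-N worlds satisfying kb that also satisfy query."""
    sat_kb = sat_both = 0
    for world in product(TYPES, repeat=N):
        if kb(world):
            sat_kb += 1
            if query(world):
                sat_both += 1
    return sat_both / sat_kb

# KB: every element with Hep also has Jaun, and element 0 has Jaun.
kb = lambda w: all(j for (h, j) in w if h) and w[0][1]
# Query A: element 0 has Hep.
query = lambda w: w[0][0]

for N in (2, 4, 6):
    print(N, degree_of_belief(N, kb, query))  # 0.5 for every N
```

Here the proportion is exactly 1/2 for every N (and hence in the limit): given only that patient 0 has jaundice and that hepatitis implies jaundice, the worlds split evenly on whether patient 0 has hepatitis. In general the limit need not stabilize so quickly, which is why the method is defined via the limit as N goes to infinity.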

In many cases that arise in practice, the answers we get using this method can be shown to match heuristic assumptions made in many standard AI systems, such as preference for more specific information. The approach also provides foundations for nonmonotonic reasoning. We also show that when we restrict the language to unary predicates (for example, symptoms and diseases, but not relations such as ``Taller than''), the answer provided by the random worlds method can often be computed using maximum entropy. On the other hand, if the language includes binary predicates, all connections to maximum entropy seem to disappear. Moreover, almost all the questions one might want to ask can be shown to be highly undecidable.
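For a unary language, the maximum-entropy computation amounts to maximizing entropy over the finitely many ``types'' an element can have, subject to the KB's statistical constraints. The sketch below (a hypothetical illustration, not the talk's algorithm) finds, by a coarse stdlib-only grid search, the maximum-entropy distribution over the four (Hep, Jaun) types satisfying the constraint Pr(Hep | Jaun) = 0.8:

```python
from math import log

def entropy(ps):
    return -sum(p * log(p) for p in ps if p > 0)

# Parametrize: p(~Hep, Jaun) = t, p(Hep, Jaun) = 4t (so Pr(Hep|Jaun) = 0.8),
# p(Hep, ~Jaun) = u, p(~Hep, ~Jaun) = 1 - 5t - u.
best, best_h = None, -1.0
steps = 200
for i in range(1, steps):
    for j in range(1, steps):
        t = i / (5 * steps)
        u = j / steps
        p_hj, p_nhj, p_hnj = 4 * t, t, u
        p_nhnj = 1 - p_hj - p_nhj - p_hnj
        if p_nhnj <= 0:
            continue
        h = entropy([p_hj, p_nhj, p_hnj, p_nhnj])
        if h > best_h:
            best, best_h = (p_hj, p_nhj, p_hnj, p_nhnj), h

print(best, best_h)
```

By construction every candidate satisfies the statistical constraint exactly, and the search simply picks the flattest such distribution; degrees of belief about a particular patient can then be read off from it. With binary predicates no analogue of this finite type-based computation is available, which is one way to see why the maximum-entropy connection breaks down there.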

We conclude with some general discussion of the problem of finding reasonable methods to do inductive reasoning of the sort we consider here.

The talk covers joint work with Fahiem Bacchus, Adam Grove and Daphne Koller.
