80-110: The Nature of Mathematical Reasoning

Spring 1998

Class 8: 2/12/98

1. Let's Make a Deal - Rigorously

To review, here are the questions being asked:

Suppose you are on a game show in which there are 3 doors, one of which will contain a terrific prize. The prize is placed behind a door randomly, which means:

P(Prize is placed behind door 1), abbreviated P(1), = P(2) = P(3) = 1/3. You are asked to pick a door, but you are not shown what is behind it. Suppose you pick door 1. Your host (Monty Hall), who knows where the prize has been placed, then shows you one of the doors you did not pick, the only restriction being that Monty must show you a door that is empty. Say you are shown that door 2 does not have the prize. Q1) Assuming that you want to pick the door that has the highest chance (probability) of having the prize behind it, having seen that door 2 is empty, do you want to stay with door 1 or switch to door 3?

Another, more general question concerns optimal strategies: Q2) Between the strategy of always staying with the original guess and the strategy of always switching, which is better, or are they equally good?

To answer these questions, we will use the mathematical theory of probability. The theory can be stated with 4 simple axioms. From these axioms we can derive two simple theorems, and we can apply these theorems to the Let's Make a Deal problem to rigorously derive the correct answers deductively.

The Theory of Probability

The theory of probability can be formulated for propositions or for events. I present both in parallel.

Events: Assume that we have a universe, or set Ω, of all the objects under consideration. For example, the 52 cards in a legal deck might be our universe. Assume further that we have a "field" F of subsets of Ω that is closed under union, intersection, and complementation.

Propositions: Assume that we have a vacuously true proposition T (this is like the universal set Ω). Assume that if the propositions A and B make sense, then so do the propositions (A & B), (A or B), and ~A. Here & is like intersection, or is like union, and ~ (negation) is like complementation.

Axioms

1. P(T) = P(Ω) = 1.

2. For any event or proposition A, P(A) ≥ 0.

3. If A ∩ B = ∅ (A and B are incompatible propositions), then P(A or B) = P(A ∪ B) = P(A) + P(B).

The final axiom is sometimes considered a definition, and it involves conditional probability, abbreviated cp. Notationally, P(A | B), referred to as the probability of A conditional on B, means the probability of A given that B occurred.

4. P(A | B) = P(A & B) / P(B), provided P(B) > 0.
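
To make the definition concrete with the card-deck universe mentioned above (an illustration added here, not a further axiom): let B be "the card is a heart" and A be "the card is a king." Then P(B) = 13/52, P(A & B) = 1/52 (only the king of hearts qualifies), so by Axiom 4, P(A | B) = (1/52)/(13/52) = 1/13.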

Bayes' Theorem

P(A | B) = P(B | A) P(A) / P(B), provided P(A), P(B) > 0.

Proof:

1) P(A | B) = P(A & B) / P(B) (definition of cp)

2) P(B | A) = P(A & B) / P(A) (definition of cp)

3) P(A & B) = P(B | A) P(A) (line 2, algebra)

4) P(A | B) = P(B | A) P(A) / P(B) (substitute the right side of 3 into line 1)

The Law of Total Probability:

P(A) = P(A | B)P(B) + P(A | ~B)P(~B).

Proof:

P(A | B)P(B) + P(A | ~B)P(~B) = [P(A & B) / P(B)] P(B) + [P(A & ~B) / P(~B)] P(~B) (def. of cp)

= P(A & B) + P(A & ~B) (cancel)

= P((A & B) or (A & ~B)) (axiom 3)

= P(A). (logic)
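
As a quick sanity check (my own illustration, not part of the derivation), the following Python sketch verifies both theorems numerically on a two-event example; the joint probabilities are made-up numbers chosen only for the check.

# A small numerical check of Bayes' Theorem and the Law of Total Probability.
# The joint probabilities below are arbitrary numbers invented for the check.
p_A_and_B = 0.12       # P(A & B)
p_A_and_notB = 0.28    # P(A & ~B)
p_B = 0.40             # P(B)
p_notB = 1 - p_B       # P(~B)

p_A = p_A_and_B + p_A_and_notB           # P(A), computed directly from the joint
p_A_given_B = p_A_and_B / p_B            # Axiom 4 (definition of cp)
p_A_given_notB = p_A_and_notB / p_notB
p_B_given_A = p_A_and_B / p_A

# Bayes' Theorem: P(A | B) = P(B | A) P(A) / P(B)
assert abs(p_A_given_B - p_B_given_A * p_A / p_B) < 1e-12

# Law of Total Probability: P(A) = P(A | B) P(B) + P(A | ~B) P(~B)
assert abs(p_A - (p_A_given_B * p_B + p_A_given_notB * p_notB)) < 1e-12

print("Both identities hold for this example.")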


Now we are finally in a position to solve the Let's Make a Deal puzzle rigorously. Argument 4 is meant to answer Q1, which is a different question from Q2.

Argument 4: Having picked door 1 originally and been shown door 2 empty, what we really want to compare are two conditional, or "mid-game," probabilities (writing "Sh 2" for the event that you are shown that door 2 is empty):

Stay: P(1 | Sh 2)

Switch: P(3 | Sh 2)

We can use Bayes' Theorem to solve for these two conditional probabilities.

1) Stay: P(1 | Sh 2) = P(Sh 2 | 1) P(1) / P(Sh 2)

2) Switch: P(3 | Sh 2) = P(Sh 2 | 3) P(3) / P(Sh 2)

The stay option is strictly better just in case the ratio of equation 1 to equation 2 is strictly greater than 1. When we divide 1 by 2, P(Sh 2) cancels, leaving:

3) = [P(Sh 2 | 1) P(1)] / [P(Sh 2 | 3) P(3)]

P(1) and P(3) refer to the pre-game chances, so they both equal 1/3, which allows us to reduce 3 to:

4) = P(Sh 2 | 1) / P(Sh 2 | 3)

If the prize is behind door 3, then the host is forced to show us door 2, so P(Sh 2 | 3) = 1. The numerator, P(Sh 2 | 1), encodes Monty's rule for how often to show us door 2 when he has the choice, and this is where all the action is in answering Q1. This is called the "announcing rule." The first thing to notice is that the ratio above exceeds 1 just in case P(Sh 2 | 1) is more than 1, which is impossible because probabilities cannot exceed 1. So the stay option is never strictly better than the switch option, but it is just as good if P(Sh 2 | 1) = 1. That is, if the game show host shows you door 2 whenever he has the choice of doors 2 or 3, then there is no difference between staying and switching. If, when he has a choice about which empty door to show you, the host picks door 2 or door 3 randomly and equally often, i.e., P(Sh 2 | 1) = P(Sh 3 | 1) = .5, then the ratio above is .5/1, which means you double your chances of winning by switching. If Monty picks door 3 whenever he has a choice, then P(Sh 2 | 1) = 0, and it is infinitely more likely that you win by switching than by staying when you started with door 1 and were shown door 2 empty.
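
The following Python sketch (an illustration added here, not part of the original argument) plugs different announcing rules into equations 1 and 2, computing the denominator P(Sh 2) with the Law of Total Probability; the function name and the parameter a standing for P(Sh 2 | 1) are just labels for the example.

# Mid-game probabilities from Argument 4, assuming you originally picked door 1.
# a stands for the announcing rule P(Sh 2 | 1): how often Monty shows door 2
# when the prize is behind door 1 and he could show either door 2 or door 3.
def mid_game_probs(a):
    p1 = p2 = p3 = 1.0 / 3.0            # pre-game prize probabilities
    p_sh2_given_1 = a                   # Monty's announcing rule
    p_sh2_given_2 = 0.0                 # he never opens the prize door
    p_sh2_given_3 = 1.0                 # forced: door 2 is the only door he may open
    # Law of Total Probability gives the denominator P(Sh 2)
    p_sh2 = p_sh2_given_1 * p1 + p_sh2_given_2 * p2 + p_sh2_given_3 * p3
    stay = p_sh2_given_1 * p1 / p_sh2    # Bayes: P(1 | Sh 2)
    switch = p_sh2_given_3 * p3 / p_sh2  # Bayes: P(3 | Sh 2)
    return stay, switch

for a in (0.0, 0.5, 1.0):
    stay, switch = mid_game_probs(a)
    print(f"P(Sh 2 | 1) = {a}: stay = {stay:.3f}, switch = {switch:.3f}")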

-------------

This argument is valid, and quite a bit more rigorous than those that preceded it. It considers, however, the specific situation in which you have picked door 1 originally and been shown door 2 empty. It makes it seem as if we can do as well by staying when Monty's showing rule is informative in a certain way, i.e., when P(Sh 2 | 1) = 1. What the analysis does not reveal, however, is that if we believe Monty's showing rule is more informative than random when we see door 2 after having picked 1, then it is correspondingly uninformative when we see door 3 after having picked 1.
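
As a concrete check (added here for illustration), suppose P(Sh 2 | 1) = 1, so P(Sh 3 | 1) = 0. After seeing door 2 you compute P(1 | Sh 2) = 1 × (1/3) / P(Sh 2) = (1/3)/(2/3) = 1/2, so staying looks as good as switching. But after seeing door 3 you compute P(1 | Sh 3) = 0 × (1/3) / P(Sh 3) = 0, so staying is hopeless. Weighting the two cases by P(Sh 2) = 2/3 and P(Sh 3) = 1/3 gives (1/2)(2/3) + 0(1/3) = 1/3, exactly the overall value derived in Argument 5 below.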

Argument 5: Using "1" to mean the prize is behind door 1, and "Sh 1" to mean you were shown door 1, and given that you have chosen door 1 to begin the game, the proper general formulas, using the Law of Total Probability, for deciding whether it is better to switch or stay are as follows:

5) P(Stay & Win) = P(1 | Sh 2) P(Sh 2) + P(1 | ~Sh 2) P(~Sh 2)

~Sh 2 means that we are not shown door 2, which is the same as being shown door 1 or door 3. So 5 becomes:

6) P(Stay & Win) = P(1 | Sh 2) P(Sh 2) + P(1 | Sh 3) P(Sh 3) + P(1 | Sh 1) P(Sh 1)

The prize cannot be behind door 1 if we are shown door 1, so in the third term P(1 | Sh 1) = 0, and 6 becomes:

7) P(Stay & Win) = P(1 | Sh 2) P(Sh 2) + P(1 | Sh 3) P(Sh 3)

Applying Bayes' Theorem to each term, we get:

8) = [P(Sh 2 | 1) P(1) / P(Sh 2)] P(Sh 2) + [P(Sh 3 | 1) P(1) / P(Sh 3)] P(Sh 3)

Canceling P(Sh 2) in the left term and P(Sh 3) in the right, we get:

9) = [P(Sh 2 | 1) P(1) ] + [P(Sh 3 | 1) P(1)]

Factoring out P(1), we get:

10) = P(1) [P(Sh 2 | 1) + P(Sh 3 | 1)]

Because Monty must show us either door 2 or door 3, P(Sh 2 | 1) + P(Sh 3 | 1) = 1. This sum reflects the comment above: no matter how informative being shown door 2 is (P(Sh 2 | 1)), being shown door 3 is correspondingly uninformative (P(Sh 3 | 1)), because the two must sum to exactly 1.

Now we can fill in numbers:

11) P(Stay & Win) = 1/3 [1] = 1/3

Since the prize is behind either the door you originally chose or the one remaining closed door, P(Switch & Win) + P(Stay & Win) = 1, so P(Switch & Win) = 2/3.

So no matter what Monty's announcing rule is when he has a choice, the overall probability of winning when we stay with door 1 is 1/3, and the overall probability of winning when we switch away from door 1 is 2/3. Only the numbering changes in this argument when we change our original choice from door 1 to door 2 or door 3, so the argument is general.
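
For readers who like empirical checks, here is a rough Monte Carlo sketch in Python (mine, not part of the notes) that estimates the two overall winning probabilities under an arbitrary announcing rule; the parameter a below, the chance that Monty opens the lower-numbered of the two doors he may legally open when he has a choice, is just one way to parameterize his rule for the illustration.

import random

# Always-stay vs. always-switch over the whole game, for any announcing rule.
def play(a, n=200_000):
    stay_wins = switch_wins = 0
    for _ in range(n):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # Doors Monty may open: not your door, not the prize door.
        legal = [d for d in range(3) if d != choice and d != prize]
        if len(legal) == 2:
            shown = min(legal) if random.random() < a else max(legal)
        else:
            shown = legal[0]
        # Switching means taking the one remaining closed door.
        switched = next(d for d in range(3) if d != choice and d != shown)
        stay_wins += (choice == prize)
        switch_wins += (switched == prize)
    return stay_wins / n, switch_wins / n

for a in (0.0, 0.5, 1.0):
    stay, switch = play(a)
    print(f"a = {a}: P(stay wins) ~ {stay:.3f}, P(switch wins) ~ {switch:.3f}")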

------------------------------


The Causal Story

Consider the following causal diagram of the Let's Make a Deal problem.

Figure 1: C (your choice) → S (door shown empty) ← P (prize location)

Your original choice (C) has no causal connection with where the prize is (P), but the door you are shown as empty (S) depends on what you chose and on where the prize is. The diagram is an instance of two independent causes with a common effect, and in general, independent causes are usually made dependent once we condition on a common effect. Consider the causal structure in Figure 2 below, which is another instance of independent causes with a common effect:

Figure 2: empty gas tank → car won't start ← dead battery

In this example, knowing the status of the gas tank tells you nothing about the battery, and vice versa. That is what we mean by independent causes. But if you know that the car won't start, i.e., you condition on the common effect, and you are then told that your gas tank isn't empty, then you know quite a lot about the battery (it must be dead). Thus independent causes become dependent conditional on their common effect.

Translating this into the Let's Make a Deal diagram in Figure 1, you can see why your original choice tells you nothing about where the prize is, but after you are shown which door is empty, knowing your choice does change the probabilities of where the prize is, albeit not all the way to 0 or 1 as in the car example. How exactly does it change the probabilities? For that we have to go a step further. I include it for the curious, but you will definitely not be tested on it.

We need to first associate a full joint probability distribution with this causal diagram, and then use some rules for making inferences about conditional probabilities based on the diagram and the distribution associated with the diagram.

Associating a full joint probability distribution with the diagram turns out to be quite easy. We need only assume what is called the Causal Markov Condition (Spirtes, Glymour and Scheines, 1993), which holds roughly that a variable is independent of all other variables (except its effects), conditional on its immediate causes. Although the whole story is somewhat more complicated, this means that to specify the whole joint distribution, it is sufficient to specify the probabilities of C unconditionally (since it has no immediate causes), the probabilities of P unconditionally, and the probabilities of S conditional on C and P. Below we construct such a table.
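
Spelled out in the diagram's notation (added here for concreteness), that factorization is P(C, P, S) = P(C) P(P) P(S | C, P), so the three marginal probabilities for C, the three for P, and the 27 conditional probabilities for S determine every joint probability.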

Probability Table for the Let's Make a Deal Diagram

P(C = door 1) = 1/3
P(C = door 2) = 1/3
P(C = door 3) = 1/3
P(P = door 1) = 1/3
P(P = door 2) = 1/3
P(P = door 3) = 1/3

Conditional distribution of S given C and P (each entry in a row is conditional on that row's values of C and P):

C=1 & P=1:   P(S=1) = 0.0   P(S=2) = A     P(S=3) = 1-A
C=1 & P=2:   P(S=1) = 0.0   P(S=2) = 0.0   P(S=3) = 1.0
C=1 & P=3:   P(S=1) = 0.0   P(S=2) = 1.0   P(S=3) = 0.0
C=2 & P=1:   P(S=1) = 0.0   P(S=2) = 0.0   P(S=3) = 1.0
C=2 & P=2:   P(S=1) = B     P(S=2) = 0.0   P(S=3) = 1-B
C=2 & P=3:   P(S=1) = 1.0   P(S=2) = 0.0   P(S=3) = 0.0
C=3 & P=1:   P(S=1) = 0.0   P(S=2) = 1.0   P(S=3) = 0.0
C=3 & P=2:   P(S=1) = 1.0   P(S=2) = 0.0   P(S=3) = 0.0
C=3 & P=3:   P(S=1) = C     P(S=2) = 1-C   P(S=3) = 0.0

We assume that our choice of door is random, that the assignment of the prize is also random, and that these two variables are independent. So the first six lines of the table give what are called the "marginal" distributions for C and P respectively. The remaining 27 entries in the table give what is called the "conditional" distribution of S given C and P. Notice that, in virtue of how the game constrains our host, every entry is either 0 or 1 except for the 6 that involve A, B, or C. These are the cases in which Monty has a choice of which door to show you, and A, B, and C must be between 0 and 1 inclusive.

What we want is the probability of P given C and S, i.e., P(P | C, S), but there is no entry in the table of this form. Judea Pearl (1988) figured out a simple way to convert the probabilities in a table like the one above into this form, with general rules that are correct and efficient for every causal diagram and accompanying probability distribution you might enter. These rules are called "updating," and I won't go into them here (thank goodness, eh?). To take one simple case, suppose you wanted to know P(P=1 | C=1 & S=2). That is, supposing you had chosen door 1 originally (C=1) and were shown door 2 empty (S=2), what is the probability that the prize is behind door 1 (P=1)? This turns out to equal A/(A+1). If, for example, A = .5, then the probability of winning if you stay = .5/1.5 = 1/3.
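
For the curious, here is a small Python sketch (my own check, not Pearl's updating algorithm) that gets the same answer by brute-force enumeration of the C = 1 rows of the table; the function name is just a label for the illustration.

# Brute-force check for the case C = 1: P(P = 1 | C = 1 & S = 2) from the table,
# compared with A/(A+1). A is the announcing rule P(S = 2 | C = 1 & P = 1).
def p_prize1_given_shown2(A):
    p_prize = {1: 1/3, 2: 1/3, 3: 1/3}     # marginal distribution of P
    p_show2 = {1: A, 2: 0.0, 3: 1.0}       # P(S = 2 | C = 1 & P = p), from the table
    joint = {p: p_prize[p] * p_show2[p] for p in (1, 2, 3)}   # P(P = p & S = 2 | C = 1)
    return joint[1] / sum(joint.values())  # condition on S = 2

for A in (0.0, 0.5, 1.0):
    print(f"A = {A}: enumeration gives {p_prize1_given_shown2(A):.3f}, "
          f"A/(A+1) = {A / (A + 1):.3f}")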


References

Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, CA.

Scheines, R., Spirtes, P., Glymour, C., and Meek, C. (1994). TETRAD II: User's Manual. Lawrence Erlbaum Associates, Hillsdale, NJ.

Spirtes, P., Glymour, C., and Scheines, R. (1993). Causation, Prediction, and Search. Lecture Notes in Statistics, vol. 81. Springer-Verlag, New York.

