
Monday, May 31, 2010

Monty Hall Paradox

The Monty Hall problem comes from a game show. There are three doors, one of which hides the prize. The contestant picks one of the doors, and then one of the other two doors, a wrong one, is opened, leaving the contestant with two closed doors. The contestant is now asked whether he is happy to stick with his door or would rather switch. The problem asks: is it better for the contestant to stay with what he has, to switch, or does it not matter at all whether he switches or stays?

The reason why this problem is called a "paradox" is that it is counterintuitive to most people. Most people say that it does not matter which door he chooses because there are two doors. Thus, they falsely conclude, there must be a 50% chance of being right with either of the two doors. When I first heard this problem, I think I was 16, I thought about it for a little bit and concluded that switching increases your chances by 100%. So definitely one must switch in such a game! I did not find this problem to be challenging, and I still do not understand why people today are still so confused by it. Here are some of the explanations that I came up with. Some of these were already observed by other people, but some I never heard anyone else remark on before. Hopefully, you will find one of my many explanations of the Monty Hall problem satisfying if this problem ever confused you.

Empirical Method: Here is what I think is the simplest way to see that the probability of winning by playing the "no-switch strategy" is 1/3 and the probability of winning by playing the "switch strategy" is 2/3. This is the simplest way because it does not involve any deductive arguments or probability theory, only basic experiments. There is something called the "empirical probability" and the "theoretical probability". The empirical probability is the probability obtained by experiment: we simply divide our successes by our total number of experiments, and this ratio is the empirical probability. The theoretical probability is the probability obtained from probability theory; it is computed mathematically without the use of any experiments.

For example, suppose we would like to determine the probability of throwing a seven with a pair of dice. The theoretical way to do this is to notice that there are 6 times 6 = 36 choices for a pair (x,y), where x is between 1 and 6 for the first die and y is between 1 and 6 for the second die. To get a seven we need x+y=7. There are exactly 6 such candidates: (1,6),(2,5),(3,4),(4,3),(5,2),(6,1). Therefore, the probability of tossing a seven is 6/36 = 1/6. We computed the probability in a theoretical manner. The empirical way of doing this is by just throwing the dice, counting the number of times you threw them and the number of times you got a seven. Take the number of times you got a seven, divide by the total number of experiments, and that is your empirical probability. The Strong Law of Large Numbers says that the empirical probability converges to the theoretical probability. Thus, the more of these experiments you do, the better your estimate of the actual probability will be. If you throw the dice 600 times you are expected to get a seven about 100 times, maybe 101 or 102, or perhaps 98 or 99, but the ratio will be really close to 1/6. The empirical method sometimes has an advantage over the theoretical method because at times the theoretical probability can be hard to compute (for instance, what is the probability of throwing five dice, then throwing them again and getting the same arrangement of numbers? This is a much more complicated probability question; it can be computed, but it is a little involved).

To do the Monty Hall experiment, find a friend. Ask him to take two red aces and the ace of spades, mix them up, and put the three cards face down on the table. The winner in this case is the ace of spades. First try the switch strategy: pick a card, your friend turns over another card which is wrong, and then you switch your pick to the remaining card. Mark your successes and total attempts to compute your empirical probability; you will find it is close to 2/3. Then do the experiment again with the no-switch strategy: you pick a card, your friend turns over a card which is wrong, but you do not switch your original pick. Mark your successes and total attempts to compute your empirical probability; you will find it is close to 1/3. You should now be able to conclude that it is a good idea to switch in the Monty Hall game because it increases your chances by 100%.
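If shuffling cards by hand feels tedious, here is a minimal Monte Carlo sketch of the same experiment in Python (the function name and the trial count are my own choices, not part of the original post). It plays the game many times with each strategy and prints the empirical probabilities, which come out close to 1/3 and 2/3.

import random

def play(switch, trials=100000):
    """Play the Monty Hall game `trials` times and return the empirical win rate."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)          # door hiding the prize
        pick = random.randrange(3)           # contestant's initial pick
        # the host opens a door that is neither the pick nor the prize
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            # switch to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print("no-switch:", play(switch=False))   # about 1/3
print("switch:   ", play(switch=True))    # about 2/3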

Self Experiment Method: If you are uninterested in finding someone to help you with this experiment, as it was in my case, this is an experiment you can do by yourself. Again take three aces, two red and one ace of spades. Mix them up and put them face down on the table. Pick a card. Now flip over one of the two remaining cards. But there is a problem. When you have a partner to help you out, he will always flip over a red ace, since he knows where the ace of spades is. Since you have no idea which card is where, you might on occasion flip over the ace of spades by accident. But this is not a problem! If you are playing the switch strategy and you flipped over the ace of spades by accident, then you win, because if you had a partner he would have flipped over the other red ace and you would have switched to the ace of spades anyway. Thus, flipping over the ace of spades counts as a success for the switch strategy. And if you are playing the no-switch strategy and you flipped over the ace of spades by accident, then you lose, because if you had a partner he would have flipped over the other red ace and you would have stayed on your original pick; since the last remaining card is the ace of spades, you lose. This is a way to compute the empirical probability for the Monty Hall problem by yourself. To summarize, the steps are the following. If you are playing the no-switch strategy, flip over a card and stay with your pick, but if you flip over the ace of spades then you automatically lose. If you are playing the switch strategy, pick a card and flip one over; if you flipped over the ace of spades then you win, and otherwise you switch to the remaining card and win exactly when it is the ace of spades. Now compute your successes over your total attempts to get the empirical probability. You will again find about 1/3 for the no-switch strategy and 2/3 for the switch strategy.
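Here is a rough sketch, again in Python and with names of my own invention, of this solo version: one of the two unpicked cards is flipped at random, and accidentally revealing the ace of spades is scored as a win for the switch strategy and a loss for the no-switch strategy. The rates again come out near 2/3 and 1/3.

import random

def solo_play(switch, trials=100000):
    """Solo card experiment: flip one of the two unpicked cards at random."""
    wins = 0
    for _ in range(trials):
        spade = random.randrange(3)          # position of the ace of spades
        pick = random.randrange(3)
        flipped = random.choice([c for c in range(3) if c != pick])
        if flipped == spade:
            # accidentally revealed the winner: switch wins, no-switch loses
            wins += 1 if switch else 0
        else:
            remaining = next(c for c in range(3) if c != pick and c != flipped)
            wins += (remaining == spade) if switch else (pick == spade)
    return wins / trials

print("solo no-switch:", solo_play(switch=False))   # about 1/3
print("solo switch:   ", solo_play(switch=True))    # about 2/3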

Common Sense Method: Here is what I think is the easiest way to see why the probability of winning with the switch strategy is 2/3 and with the no-switch strategy is 1/3. This uses no probability theory, just basic common sense. Let us consider the switch strategy. Under the switch strategy, if you pick a wrong door then you win. Why? Very simple. If you pick a wrong door, then among the two remaining doors the other wrong door will be opened, leaving you with a wrong door and the right door. If you switch, you must be switching to the right door. Thus, you win. This means that under the switch strategy, if you pick a wrong door you win. There are two wrong doors out of three, therefore your chances of winning are 2/3. Now suppose you use the no-switch strategy. Picking the right door initially is your only way to win, because you are not switching, you are sticking with what you picked. Thus, your chances of winning are the same as the chances of picking the right door initially, which is 1/3.
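The common-sense argument can also be checked exhaustively, since there are only 3 times 3 equally likely (prize, pick) pairs. The short sketch below (my own illustration, not from the post) enumerates them and confirms that switching wins exactly when the initial pick is wrong.

from fractions import Fraction
from itertools import product

pairs = list(product(range(3), repeat=2))                        # (prize, pick), all equally likely
switch_wins = sum(1 for prize, pick in pairs if pick != prize)   # switching wins iff the pick was wrong
stay_wins = sum(1 for prize, pick in pairs if pick == prize)     # staying wins iff the pick was right

print("switch:", Fraction(switch_wins, len(pairs)))   # 2/3
print("stay:  ", Fraction(stay_wins, len(pairs)))     # 1/3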

Probability Argument: Here is a method that uses very simple probability theory. The probabilities of the three doors must sum to 1. For definiteness, say you pick door one and door two is opened and shown to be wrong. The probability that door one is correct is 1/3 (since there are three doors and only one is correct). The probability that door two is correct is 0, because it was revealed to be wrong. Therefore, the probability of the last remaining door must now be 2/3, because 1/3 + 0 + 2/3 = 1; the probabilities must sum to 1. Therefore, by switching you are switching to a door which has probability 2/3 of being correct.
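In symbols (just a restatement of the argument above, with P_i denoting the probability that door i hides the prize after door two has been opened):

\[
P_1 + P_2 + P_3 = 1, \qquad P_1 = \tfrac{1}{3}, \qquad P_2 = 0 \quad\Longrightarrow\quad P_3 = 1 - \tfrac{1}{3} - 0 = \tfrac{2}{3}.
\]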

Exaggerated Problem: People who still foolishly insist that there are two doors and therefore the probability must be 1/2 should consider the following exaggerated example. Assume there are 100 doors, one of which is correct. You pick one, and 98 of the other doors are opened, showing that they are wrong doors. Now you are left with two doors. Can you really say now that the probability is still 1/2 just because there are two doors? You have to be foolish to propose something like that now. If you followed the arguments above, you should realize that the probability of being right by switching is now 99/100. Just use the probability argument again: the probabilities of the doors have to add up to 1. If you pick one door and 98 are shown to be wrong, then you picked a door with probability 1/100 and the 98 opened doors have probability 0, which forces the last remaining door to have probability 99/100. You can exaggerate this even more with a million doors. It should be clear now that the fact that two doors remain is irrelevant to saying that the probability of each door is 1/2. It would be 1/2 only to an outsider who has no knowledge of what happened before, but to the contestant the probability of the remaining door is increased to 99/100.
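The same kind of simulation as before, generalized to any number of doors (again a hedged sketch with invented names), makes the exaggerated version concrete: with 100 doors the switch strategy wins about 99% of the time.

import random

def play_n_doors(n, switch, trials=100000):
    """Monty Hall with n doors: the host opens n-2 wrong, unpicked doors."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n)
        pick = random.randrange(n)
        if switch:
            # after n-2 wrong doors are opened, exactly one other door stays closed;
            # it is the prize door unless the original pick was already the prize,
            # so switching wins exactly when the original pick was wrong
            wins += (pick != prize)
        else:
            wins += (pick == prize)
    return wins / trials

print("100 doors, no-switch:", play_n_doors(100, switch=False))   # about 0.01
print("100 doors, switch:   ", play_n_doors(100, switch=True))    # about 0.99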

Reverse Problem: A case in which it is better to use the no-switch strategy is the reverse Monty Hall problem. There are three doors, two of which are winning doors and one of which is empty. Suppose that, symmetrically to the original game, the host now opens a winning door among the two you did not pick, and that door is then no longer available. If you use the switch strategy, your chances of winning are 1/3. If you use the no-switch strategy, your chances of winning are 2/3. The derivation of these numbers is based on exactly the same ideas already developed in the previous paragraphs, so you should see why this is so.
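A quick sketch of the reverse game under that assumed host behavior (the host reveals, and takes away, a winning door you did not pick); as before the names are mine. The empirical rates come out near 2/3 for staying and 1/3 for switching.

import random

def reverse_play(switch, trials=100000):
    """Reverse Monty Hall: two winning doors, one empty; the host opens a winning, unpicked door."""
    wins = 0
    for _ in range(trials):
        empty = random.randrange(3)          # the single losing door
        pick = random.randrange(3)
        # the host opens a winning door among the two unpicked doors; it is then lost
        opened = next(d for d in range(3) if d != pick and d != empty)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick != empty)              # you win on any non-empty door
    return wins / trials

print("reverse no-switch:", reverse_play(switch=False))   # about 2/3
print("reverse switch:   ", reverse_play(switch=True))    # about 1/3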

An interesting application of the Monty Hall problem is to increase our confidence whenever there is uncertainty. If there are various choices or possibilities and I do not know which one is correct, I can apply the ideas of Monty Hall to increase my chances of being right. Say there are a number of choices and I pick one. Then in the future it is revealed that some of the other choices are false. In the manner of Monty Hall, I should switch from my former choice to one of the remaining choices. This will increase my chances of being right. To illustrate what I am saying, consider the game Who Wants to Be a Millionaire. A question comes up and you have no idea what the right answer is. A strategy is to pick one of the four choices, then use the 50-50 lifeline to eliminate two choices, and if your choice is still up there, switch over to the other remaining choice. Provided the elimination works like the host in Monty Hall, never touching your pick and never touching the right answer, this increases your chances by 200%, from 1/4 to 3/4.
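Here is a sketch of that Millionaire scenario under the Monty-Hall-like assumption stated above (the two eliminated answers are always wrong and never your pick; the names are mine). Switching wins about 3/4 of the time, staying about 1/4.

import random

def fifty_fifty_play(switch, trials=100000):
    """Four answers, one correct; two wrong answers (never the pick) are eliminated."""
    wins = 0
    for _ in range(trials):
        correct = random.randrange(4)
        pick = random.randrange(4)
        # eliminate two answers that are neither correct nor the pick
        candidates = [a for a in range(4) if a != correct and a != pick]
        eliminated = set(random.sample(candidates, 2))
        if switch:
            pick = next(a for a in range(4) if a != pick and a not in eliminated)
        wins += (pick == correct)
    return wins / trials

print("stay:  ", fifty_fifty_play(switch=False))   # about 1/4
print("switch:", fifty_fifty_play(switch=True))    # about 3/4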

I hope that all of these explanations really make it clear that there is no mystery to the Monty Hall problem. It really is a straightforward and easy problem if you think about it correctly.

1 comment:

  1. I prefer to explain this problem using what you call the Common Sense method. It seems to help if you emphasize to people that it will be the same if they precommit to what they will do. Then if they work out the general result for switching or not, it becomes clearer. I think the ability to choose in the middle makes it harder to see what is going on.
