Bayesian analysis, probabilities of accidents and the Monty Hall Problem

In my research methods class today we talked about the difference between Classical and Bayesian analysis. In Classical analysis you use available statistics to make inferences about something, while in Bayesian analysis you use other information to interpret available statistics.

Consider the following silly but clarifying example: Suppose I show you a clear bowl with 5 red balls and 5 blue balls, and suppose I ask you to close your eyes and pull out a ball. What is the probability that you will successfully pull out a red ball? A classical statistician will say, correctly, 50 percent, since 5 of the 10 balls are red. However, a Bayesian may say something different if he knows something about who put the game together. For instance, if the Bayesian knows that I am a jokester and have a history of gluing red balls to bowls, then the Bayesian will say the probability that you can “successfully” pull out a red ball will be much less than 50 percent.

Here is another, perhaps more relevant, example: Suppose 60 percent of all vehicle accidents involve drivers using cell phones. Can we conclude that there is a greater than 50 percent chance that someone using a cell phone while driving will be in an accident? A Classical thinker may conclude “yes,” since more than half of all accidents involve cell phones. Politicians appear to think this way too, since they use these kinds of numbers to promote laws restricting cell phone use while driving. However, a Bayesian will want to consider other information, such as the percent of all drivers who are in accidents and the percent of cell phone use among drivers not in accidents.

For example, if 5 percent of drivers are in an accident on any given day, and if 30 percent of non-accident drivers use cell phones while driving, then what is the probability that someone will be in an accident given that they are using a cell phone? It turns out to be a lot less than 60 percent: about 9.5 percent. (The formula is (0.05)(0.6)/[(0.05)(0.6)+(0.95)(0.3)] for anyone who wants to check my math.) Of course, to make the point that one should not use cell phones while driving, we should also calculate the probability that someone will be in an accident given that they are not using a cell phone. This is less than 3 percent. (The formula is (0.05)(0.4)/[(0.05)(0.4)+(0.95)(0.7)].) So, using a cell phone while driving almost doubles the chance of being in an accident relative to the 5 percent baseline, while not using a cell phone decreases the likelihood of being in an accident by about 40 percent. Clearly one is better off not using a cell phone while driving. Wikipedia has a useful discussion of the math behind the analysis here.
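For readers who want to check the arithmetic, here is a quick sketch of the Bayes calculation in Python, using the hypothetical numbers from the example above (all three input probabilities are made up for illustration, as noted later in the post):

```python
# Hypothetical inputs from the example:
p_acc = 0.05              # P(accident on a given day)
p_cell_given_acc = 0.60   # P(using cell phone | in an accident)
p_cell_given_no_acc = 0.30  # P(using cell phone | not in an accident)

# Bayes' rule: P(accident | cell) =
#   P(cell | accident) * P(accident) / P(cell)
p_acc_given_cell = (p_acc * p_cell_given_acc) / (
    p_acc * p_cell_given_acc + (1 - p_acc) * p_cell_given_no_acc
)

# Same calculation conditioned on NOT using a cell phone:
p_acc_given_no_cell = (p_acc * (1 - p_cell_given_acc)) / (
    p_acc * (1 - p_cell_given_acc) + (1 - p_acc) * (1 - p_cell_given_no_acc)
)

print(round(p_acc_given_cell, 3))     # about 0.095
print(round(p_acc_given_no_cell, 3))  # about 0.029
```

The two print statements reproduce the 9.5 percent and under-3-percent figures in the text.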

These numbers are hypothetical. I do not have actual data on the percent of cars in accidents, the percent of drivers using cell phones, etc. The point is that we can obtain a better analysis by carefully considering all relevant information. That is, it is not always correct to draw conclusions directly from the data presented to us. Moreover, biases can impair our ability to understand what is going on around us, unless we are careful in how we draw conclusions. We see the wisdom in this from observing how people behave during presidential elections. A person’s bias in favor of a particular candidate seems to make him or her impervious to evidence that the candidate is a lying and immoral buffoon.

This type of analysis is also helpful when considering medical tests. If 2 percent of the population has a disease and the doctor gives you a diagnosis that you have the disease, then what is the probability you really have it given that the doctor said you did? The answer depends on how accurate the medical test is. For example, if the medical test is accurate 95 percent of the time (that is, it correctly classifies both sick and healthy people 95 percent of the time), then the chance you actually have the disease is only about 30 percent. In contrast, if the medical test is accurate only 80 percent of the time, then the chance you have the disease is really less than 8 percent. In either case, I would get a second opinion.
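The medical-test numbers can be verified the same way. A small sketch, assuming (as the example seems to) that "accurate" means the test gives the right answer with the same probability for both sick and healthy people:

```python
def posterior(prior, accuracy):
    """P(disease | positive test) via Bayes' rule, assuming the test is
    correct with probability `accuracy` for sick and healthy people alike."""
    true_pos = prior * accuracy            # sick and correctly flagged
    false_pos = (1 - prior) * (1 - accuracy)  # healthy but wrongly flagged
    return true_pos / (true_pos + false_pos)

print(round(posterior(0.02, 0.95), 3))  # about 0.279
print(round(posterior(0.02, 0.80), 3))  # about 0.075
```

With a 2 percent prior, a positive result from a 95-percent-accurate test yields roughly a 28 percent chance of disease, and an 80-percent-accurate test yields under 8 percent, matching the figures above.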

We had fun with this example in class: Suppose you are on the game show “Let’s Make a Deal.” Monty Hall, the show’s host, shows you three doors, A, B and C. Behind one is a new car, behind the other two are goats. You are asked to pick a door. You pick door A. Monty opens door B to reveal a goat and then offers to allow you to switch to door C or stay with your choice of door A. Should you switch or stay? Someone asked Marilyn vos Savant, a woman listed in the Guinness Book of World Records as having the highest IQ, this question. She gave her answer in 1990 in a Parade magazine column. It generated thousands of letters, many from PhDs saying she was wrong. Her column and responses are here. It’s funny to read the reactions of so-called academics. Answer the “stay or switch” question first before reading her response. To play the game to convince yourself that she was right, see this online app here. Play it many times by staying and see how often you win. Then play it many times by switching each time to see how often you win. You’ll find that the probability of winning the car doubles from one-third to two-thirds by switching. There’s also an official “Let’s Make a Deal” website.
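If you would rather simulate than play the online app by hand, a short Monty Hall simulation makes the same point. The key observation it encodes: since Monty always opens a goat door you did not pick, switching wins exactly when your initial pick was wrong.

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game and return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # car hidden behind a random door
        pick = random.randrange(3)  # contestant's initial pick
        # Monty opens a goat door that is neither the car nor the pick,
        # so switching wins iff the initial pick missed the car.
        if switch:
            wins += (pick != car)
        else:
            wins += (pick == car)
    return wins / trials

print(play(switch=False))  # about 1/3
print(play(switch=True))   # about 2/3
```

Run it a few times: staying hovers around one-third, switching around two-thirds, just as vos Savant said.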

Given the choice between watching “Let’s Make a Deal” and presidential candidates debate, I’ll place my odds on the game show.