Why You’ll Soon Be Playing Mega Trillions: Neglect of Probability
Two games of chance: In the first, you can win $10 million, and in the second, $10,000. Which do you play? If you win the first game, it changes your life completely: You can quit your job, tell your boss where to go, and live off the winnings. If you hit the jackpot in the second game, you can take a nice vacation in the Caribbean, but you’ll be back at your desk soon enough to see your postcard arrive. The probability of winning is one in 100 million in the first game, and one in 10,000 in the second game. So which do you choose?
Our emotions draw us to the first game, even though the second is ten times better, objectively considered (by expected value: the prize times the probability of winning). Therefore, the trend is toward ever-larger jackpots—Mega Millions, Mega Billions, Mega Trillions—no matter how small the odds are.
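That comparison is easy to verify; here is a minimal sketch in Python, ignoring ticket prices and taxes:

```python
# Expected value of each game: prize times probability of winning.
mega = 10_000_000 / 100_000_000   # $10 million at 1-in-100-million: $0.10 per ticket
modest = 10_000 / 10_000          # $10,000 at 1-in-10,000:          $1.00 per ticket

print(mega, modest)  # 0.1 1.0 -> the modest game is worth ten times as much
```

A ticket in the second game is worth a full dollar on average; a ticket in the first is worth a dime. Our emotions never run this calculation.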
In a classic experiment from 1972, participants were divided into two groups, A and B. The members of group A were told that they would definitely receive a small electric shock. Those in group B were told that the risk of this happening was only 50 percent. The researchers measured physical anxiety (heart rate, nervousness, sweating, etc.) shortly before commencing. The results were, well, shocking: There was absolutely no difference. Participants in both groups were equally stressed. Next, the researchers announced a series of reductions in the probability of a shock for group B: from 50 percent to 20 percent, then 10 percent, then 5 percent. The result: still no difference! However, when they declared they would increase the strength of the expected current, both groups’ anxiety levels rose—again, by the same degree. This illustrates that we respond to the expected magnitude of an event (the size of the jackpot or the amount of electricity), but not to its likelihood. In other words: We lack an intuitive grasp of probability.
The proper term for this is neglect of probability, and it leads to errors in decision making. We invest in start-ups because the potential profit makes dollar signs flash before our eyes, but we forget (or are too lazy) to investigate the slim chances of new businesses actually achieving such growth. Similarly, following extensive media coverage of a plane crash, we cancel flights without really considering the minuscule probability of crashing (which, of course, remains the same before and after such a disaster). Many amateur investors compare their investments solely on the basis of yield. For them, Google shares with a return of 20 percent must be twice as good as property that returns 10 percent. That’s wrong. It would be a lot smarter to also consider both investments’ risks. But then again, we have no natural feel for this, so we often turn a blind eye to it.
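To make the yield-versus-risk comparison concrete, here is a minimal sketch; the success probabilities and the assumption of total loss on failure are invented purely for illustration, not figures from the text:

```python
def expected_return(yield_pct: float, p_success: float, loss_pct: float = 100.0) -> float:
    """Probability-weighted return: the yield if things work out,
    a loss (here assumed to be everything) if they don't."""
    return p_success * yield_pct - (1 - p_success) * loss_pct

# A flashy 20 percent yield with an assumed 20 percent chance of total loss...
print(round(expected_return(20.0, p_success=0.80), 2))  # -4.0: negative on average
# ...can be worse than a modest 10 percent yield that almost always pays out.
print(round(expected_return(10.0, p_success=0.98), 2))  #  7.8: solidly positive
```

Comparing yields alone hides exactly the term that matters: the probability of actually collecting them.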
Back to the experiment with the electric shocks: In group B, the probability of getting a jolt was further reduced: from 5 percent to 4 percent to 3 percent. Only when the probability reached zero did group B respond differently than group A. To us, 0 percent risk seems infinitely better than a (highly improbable) 1 percent risk.
To see this zero-risk preference in action, let’s examine two methods of treating drinking water. Suppose a river has two equally large tributaries. One is treated using method A, which reduces the risk of dying from contaminated water from 5 percent to 2 percent. The other is treated using method B, which reduces the risk from 1 percent to 0 percent; that is, the threat is completely eliminated. So, method A or B? If you think like most people, you will opt for method B—which is silly, because with method A, deaths fall by three percentage points, and with method B, by just one. Method A is three times as good! This fallacy is called the “zero-risk bias.”
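The arithmetic is quick to confirm; the population figure below is assumed for illustration, and the three-to-one ratio holds whatever it is:

```python
population = 100_000  # people per tributary (assumed, for illustration only)

# Deaths averted = (old risk - new risk) x people exposed.
saved_by_A = (5 - 2) * population // 100  # method A: 3,000 lives saved
saved_by_B = (1 - 0) * population // 100  # method B: 1,000 lives saved

print(saved_by_A, saved_by_B)  # 3000 1000 -> method A saves three times as many
```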
A classic example of this is the Delaney Clause of the 1958 U.S. Food Additives Amendment, which prohibits food additives shown to cause cancer. Instituted to achieve zero risk of cancer, this ban sounds good at first, but it ended up leading to the use of more dangerous (but noncarcinogenic) food additives. It is also absurd: As Paracelsus illustrated in the sixteenth century, poisoning is always a question of dosage. Furthermore, such a law can never be enforced properly, since it is impossible to remove the last “banned” molecule from food. Each farm would have to function like a hyper-sterile computer-chip factory, and the cost of food would increase a hundredfold. Economically, zero risk rarely makes sense. One exception is when the consequences are colossal, such as a deadly, highly contagious virus escaping from a biotech laboratory.
We have no intuitive grasp of risk, and thus we distinguish poorly among different threats. The more serious the threat and the more emotional the topic (such as radioactivity), the less reassuring a reduction in risk seems to us. Two researchers at the University of Chicago have shown that people fear a 1 percent chance of contamination by toxic chemicals almost as much as a 99 percent chance. An irrational response, but a common one.
* Source: The Art of Thinking Clearly by Rolf Dobelli