Against the Gods: The Remarkable Story of Risk

  Prospect Theory discovered behavior patterns that had never been recognized by proponents of rational decision-making. Kahneman and Tversky ascribe these patterns to two human shortcomings. First, emotion often destroys the self-control that is essential to rational decision-making. Second, people are often unable to understand fully what they are dealing with. They experience what psychologists call cognitive difficulties.

  The heart of our difficulty is in sampling. As Leibniz reminded Jacob Bernoulli, nature is so varied and so complex that we have a hard time drawing valid generalizations from what we observe. We use shortcuts that lead us to erroneous perceptions, or we interpret small samples as representative of what larger samples would show.

  Consequently, we tend to resort to more subjective kinds of measurement: Keynes's "degrees of belief" figure more often in our decision-making than Pascal's Triangle, and gut rules even when we think we are using measurement. Seven million people and one elephant!

  We display risk-aversion when we are offered a choice in one setting and then turn into risk-seekers when we are offered the same choice in a different setting. We tend to ignore the common components of a problem and concentrate on each part in isolation, which is one reason why Markowitz's prescription for portfolio-building was so slow to find acceptance. We have trouble recognizing how much information is enough and how much is too much. We pay excessive attention to low-probability events accompanied by high drama and overlook events that happen in routine fashion. We treat costs and uncompensated losses differently, even though their impact on wealth is identical. We start out with a purely rational decision about how to manage our risks and then extrapolate from what may be only a run of good luck. As a result, we forget about regression to the mean, overstay our positions, and end up in trouble.

  Here is a question that Kahneman and Tversky use to show how intuitive perceptions mislead us. Ask yourself whether the letter K appears more often as the first or as the third letter of English words. You will probably answer that it appears more often as the first letter. Actually, K appears as the third letter twice as often as it appears as the first. Why the error? We find it easier to recall words with a certain letter at the beginning than words with that same letter somewhere else.
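
  Anyone can test this claim against a word list. The short Python sketch below counts K in the first and third positions; the file path is an assumption (many Unix systems ship a dictionary at /usr/share/dict/words, yours may differ), and a raw dictionary count does not weight words by how often they occur in running text, so the exact ratio will vary.

```python
# Count words with K first versus K third in a word list.
# Assumption: a dictionary file exists at this path; adjust as needed.
WORDLIST = "/usr/share/dict/words"

first = third = 0
with open(WORDLIST) as f:
    for line in f:
        word = line.strip().lower()
        if len(word) >= 3:            # need at least three letters
            first += word[0] == "k"
            third += word[2] == "k"

print(f"K as first letter: {first}")
print(f"K as third letter: {third}")
```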

  The asymmetry between the way we make decisions involving gains and decisions involving losses is one of the most striking findings of Prospect Theory. It is also one of the most useful.

  Where significant sums are involved, most people will reject a fair gamble in favor of a certain gain: $100,000 certain is preferable to a 50-50 possibility of $200,000 or nothing. We are risk-averse, in other words.

  But what about losses? Kahneman and Tversky's first paper on Prospect Theory, which appeared in 1979, describes an experiment showing that our choices between negative outcomes are mirror images of our choices between positive outcomes.4 In one of their experiments they first asked the subjects to choose between an 80% chance of winning $4,000 and a 20% chance of winning nothing versus a 100% chance of receiving $3,000. Even though the risky choice has a higher mathematical expectation, $3,200, 80% of the subjects chose the $3,000 certain. These people were risk-averse, just as Bernoulli would have predicted.

  Then Kahneman and Tversky offered a choice between taking the risk of an 80% chance of losing $4,000 and a 20% chance of breaking even versus a 100% chance of losing $3,000. Now 92% of the respondents chose the gamble, even though its mathematical expectation of a loss of $3,200 was once again larger than the certain loss of $3,000. When the choice involves losses, we are risk-seekers, not risk-averse.
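
  The arithmetic behind both choices is easy to verify. A minimal Python sketch, added here for illustration rather than drawn from the original experiments:

```python
# Expected values of the two Kahneman-Tversky gambles described above.
def expected_value(outcomes):
    """outcomes: iterable of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

gain_gamble = [(0.80, 4_000), (0.20, 0)]     # 80% chance to win $4,000
loss_gamble = [(0.80, -4_000), (0.20, 0)]    # 80% chance to lose $4,000

print(expected_value(gain_gamble))   # 3200.0, yet 80% preferred the certain $3,000
print(expected_value(loss_gamble))   # -3200.0, yet 92% gambled rather than lose $3,000 for sure
```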

  Kahneman and Tversky and many of their colleagues have found that this asymmetrical pattern appears consistently in a wide variety of experiments. On a later occasion, for example, Kahneman and Tversky proposed the following problem.5 Imagine that a rare disease is breaking out in some community and is expected to kill 600 people. Two different programs are available to deal with the threat. If Program A is adopted, 200 people will be saved; if Program B is adopted, there is a 33% probability that everyone will be saved and a 67% probability that no one will be saved.

  Which program would you choose? If most of us are risk-averse, rational people will prefer Program A's certainty of saving 200 lives over Program B's gamble, which has the same mathematical expectancy but involves taking the risk of a 67% chance that everyone will die. In the experiment, 72% of the subjects chose the risk-averse response represented by Program A.

  Now consider the identical problem posed differently. If Program C is adopted, 400 of the 600 people will die, while Program D entails a 33% probability that nobody will die and a 67% probability that 600 people will die. Note that the first of the two choices is now expressed in terms of 400 deaths rather than 200 survivors, while the second program offers a 33% chance that no one will die. Kahneman and Tversky report that 78% of their subjects were risk-seekers and opted for the gamble: they could not tolerate the prospect of the sure loss of 400 lives.
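
  The two frames are numerically identical, as a few lines of Python make explicit (the 33% and 67% in the text are rounded from one-third and two-thirds):

```python
TOTAL = 600

# Survivor frame: Program A saves 200 for sure; Program B saves everyone with p = 1/3.
expected_saved_A = 200
expected_saved_B = (1 / 3) * TOTAL          # 200 expected survivors

# Mortality frame: Program C loses 400 for sure; Program D loses everyone with p = 2/3.
expected_dead_C = 400
expected_dead_D = (2 / 3) * TOTAL           # 400 expected deaths

print(expected_saved_A, round(expected_saved_B))   # 200 200
print(expected_dead_C, round(expected_dead_D))     # 400 400
```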

  This behavior, although understandable, is inconsistent with the assumptions of rational behavior. The answer to a question should be the same regardless of the setting in which it is posed. Kahneman and Tversky interpret the evidence produced by these experiments as a demonstration that people are not risk-averse: they are perfectly willing to choose a gamble when they consider it appropriate. But if they are not risk-averse, what are they? "The major driving force is loss aversion," writes Tversky (italics added). "It is not so much that people hate uncertainty, but rather, they hate losing."6 Losses will always loom larger than gains. Indeed, losses that go unresolved, such as the loss of a child or a large insurance claim that never gets settled, are likely to provoke intense, irrational, and abiding risk-aversion.7

  Tversky offers an interesting speculation on this curious behavior:

  Probably the most significant and pervasive characteristic of the human pleasure machine is that people are much more sensitive to negative than to positive stimuli.... [T]hink about how well you feel today, and then try to imagine how much better you could feel.... [T]here are a few things that would make you feel better, but the number of things that would make you feel worse is unbounded.8

  One of the insights to emerge from this research is that Bernoulli had it wrong when he declared, "[The] utility resulting from any small increase in wealth will be inversely proportionate to the quantity of goods previously possessed." Bernoulli believed that it is the pre-existing level of wealth that determines the value of a risky opportunity to become richer. Kahneman and Tversky found that the valuation of a risky opportunity appears to depend far more on the reference point from which the possible gain or loss will occur than on the final value of the assets that would result. It is not how rich you are that motivates your decision, but whether that decision will make you richer or poorer. As a consequence, Tversky warns, "our preferences ... can be manipulated by changes in the reference points."9
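
  The contrast can be made concrete. Bernoulli's rule, that the utility of a small gain is inversely proportional to existing wealth, integrates to a logarithmic utility of total wealth; Kahneman and Tversky's value function instead scores changes measured from a reference point, with losses weighted more heavily than gains. The sketch below uses the parameter estimates Tversky and Kahneman published later, in 1992; those numbers are illustrative assumptions here, not figures from this chapter.

```python
import math

# Bernoulli: utility attaches to total wealth.
# du = k * dw / w integrates to u(w) = k * ln(w).
def bernoulli_utility(wealth, k=1.0):
    return k * math.log(wealth)

# The same $10,000 gain adds less Bernoulli utility to a richer person:
print(bernoulli_utility(110_000) - bernoulli_utility(100_000))      # ~0.0953
print(bernoulli_utility(1_010_000) - bernoulli_utility(1_000_000))  # ~0.00995

# Prospect theory: value attaches to gains and losses relative to a
# reference point. Exponent 0.88 and loss-aversion coefficient 2.25 are
# Tversky and Kahneman's 1992 estimates, used here only for illustration.
def prospect_value(change, alpha=0.88, lam=2.25):
    if change >= 0:
        return change ** alpha
    return -lam * (-change) ** alpha

print(prospect_value(100))    # about  57.5
print(prospect_value(-100))   # about -129.5: a $100 loss looms larger than a $100 gain
```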

  He cites a survey in which respondents were asked to choose between a policy of high employment and high inflation and a policy of lower employment and lower inflation. When the issue was framed in terms of an unemployment rate of 10% or 5%, the vote was heavily in favor of accepting more inflation to get the unemployment rate down. When the respondents were asked to choose between a labor force that was 90% employed and a labor force that was 95% employed, low inflation appeared to be more important than raising the percentage employed by five points.

  Richard Thaler has described an experiment that uses starting wealth to illustrate Tversky's warning.10 Thaler told a class of students to suppose they had just won $30 and then offered them the following choice: a coin flip where the individual wins $9 on heads and loses $9 on tails versus no coin flip. Seventy percent of the subjects selected the coin flip. Thaler offered his next class the following options: starting wealth of zero and then a coin flip where the individual wins $39 on heads and wins $21 on tails versus $30 for certain. Only 43 percent selected the coin flip.

  Thaler describes this result as the "house money" effect. Although the choice of payoffs offered to both classes is identical (regardless of the starting wealth, the individual will end up with either $39 or $21 versus $30 for sure), people who start out with money in their pockets will choose the gamble, while people who start out with empty pockets will reject it. Bernoulli would have predicted that the decision would be determined by the amounts $39, $30, or $21, whereas the students based their decisions on the reference point, which was $30 in the first case and zero in the second.
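
  A quick check confirms that the terminal outcomes in Thaler's two framings are identical; only the reference point moves. A sketch for illustration:

```python
# Terminal wealth in Thaler's two classroom framings.
# Framing 1: start with $30, then a +/- $9 coin flip, or stand pat.
frame1 = {"heads": 30 + 9, "tails": 30 - 9, "no_flip": 30}

# Framing 2: start with $0; the flip pays $39 or $21, or take $30 for sure.
frame2 = {"heads": 39, "tails": 21, "no_flip": 30}

assert frame1 == frame2   # same outcomes, yet 70% gambled in framing 1 and only 43% in framing 2
```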

  Edward Miller, an economics professor with an interest in behavioral matters, reports a variation on these themes. Although Bernoulli uses the expression "any small increase in wealth," he implies that what he has to say is independent of the size of the increase.11 Miller cites various psychological studies that show significant differences in response, depending on whether the gain is a big one or a small one. Occasional large gains seem to sustain the interest of investors and gamblers for longer periods of time than consistent small winnings. That response is typical of investors who look on investing as a game and who fail to diversify; diversification is boring. Well-informed investors diversify because they do not believe that investing is a form of entertainment.

  Kahneman and Tversky use the expression "failure of invariance" to describe inconsistent (not necessarily incorrect) choices when the same problem appears in different frames. Invariance means that the preference between two options should not depend on how they are described: a rational person who prefers A to B under one description of the choice will still prefer A to B under any equivalent description; this requirement is at the core of von Neumann and Morgenstern's approach to utility. Or, in the case above, if saving 200 lives for certain is the rational decision in the first set, saving 200 lives for certain should be the rational decision in the second set as well.

  But research suggests otherwise:

  The failure of invariance is both pervasive and robust. It is as common among sophisticated respondents as among naive ones.... Respondents confronted with their conflicting answers are typically puzzled. Even after rereading the problems, they still wish to be risk averse in the "lives saved" version; they will be risk seeking in the "lives lost" version; and they also wish to obey invariance and give consistent answers to the two versions....

  The moral of these results is disturbing. Invariance is normatively essential [what we should do], intuitively compelling, and psychologically unfeasible.12

  The failure of invariance is far more prevalent than most of us realize. The manner in which questions are framed in advertising may persuade people to buy something despite negative consequences that, in a different frame, might persuade them to refrain from buying. Public opinion polls often produce contradictory results when the same question is given different twists.

  Kahneman and Tversky describe a situation in which doctors were concerned that they might be influencing patients who had to choose between the life-or-death risks in different forms of treatment.13 The choice was between radiation and surgery in the treatment of lung cancer. The medical data at this hospital showed that no patients die during radiation, but radiation patients have a shorter life expectancy than patients who survive the risks of surgery; the overall difference in life expectancy was not great enough to provide a clear choice between the two forms of treatment. When the question was put in terms of the risk of death during treatment, more than 40% of the choices favored radiation. When the question was put in terms of life expectancy, only about 20% favored radiation.

  One of the most familiar manifestations of the failure of invariance is in the old Wall Street saw, "You never get poor by taking a profit." It would follow that cutting your losses is also a good idea, but investors hate to take losses, because, tax considerations aside, a loss taken is an acknowledgment of error. Loss-aversion combined with ego leads investors to gamble by clinging to their mistakes in the fond hope that some day the market will vindicate their judgment and make them whole. Von Neumann would not approve.

  The failure of invariance frequently takes the form of what is known as "mental accounting," a process in which we separate the components of the total picture. In so doing we fail to recognize that a decision affecting each component will have an effect on the shape of the whole. Mental accounting is like focusing on the hole instead of the doughnut. It leads to conflicting answers to the same question.

  Kahneman and Tversky ask you to imagine that you are on your way to see a Broadway play for which you have bought a $40 ticket.14 When you arrive at the theater, you discover you have lost your ticket. Would you lay out $40 for another one?

  Now suppose instead that you plan to buy the ticket when you arrive at the theater. As you step up to the box office, you find that you have $40 less in your pocket than you thought you had when you left home. Would you still buy the ticket?

  In both cases, whether you lost the ticket or lost the $40, you would be out a total of $80 if you decided to see the show. You would be out only $40 if you abandoned the show and went home. Kahneman and Tversky found that most people would be reluctant to spend $40 to replace the lost ticket, while about the same number would be perfectly willing to lay out a second $40 to buy the ticket even though they had lost the original $40.

  This is a clear case of the failure of invariance. If $80 is more than you want to spend on the theater, you should neither replace the ticket in the first instance nor buy the ticket in the second. If, on the other hand, you are willing to spend $80 on going to the theater, you should be just as willing to replace the lost ticket as you are to spend $40 on the ticket despite the disappearance of the original $40. There is no difference other than in accounting conventions between a cost and a loss.

  Prospect Theory suggests that the inconsistent responses to these choices result from two separate mental accounts, one for going to the theater and one for putting the $40 to other uses (next month's lunch money, for example). The theater account was charged $40 when the ticket was purchased, depleting that account. The lost $40 was charged to next month's lunch money, which has nothing to do with the theater account and is off in the future anyway. Consequently, the theater account is still awaiting its $40 charge.
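
  The bookkeeping behind the two mental accounts can be spelled out in a few lines; this is an illustration of the argument, not Kahneman and Tversky's own notation:

```python
TICKET_PRICE = 40

# Case 1: the $40 ticket is lost; replacing it charges the theater account twice.
theater_account_case1 = 40 + TICKET_PRICE      # $80 against "theater"
lunch_account_case1 = 0

# Case 2: $40 in cash is lost; the theater account still shows a single charge.
theater_account_case2 = TICKET_PRICE           # $40 against "theater"
lunch_account_case2 = 40                       # the loss is booked elsewhere

# Total impact on wealth is identical either way...
assert theater_account_case1 + lunch_account_case1 == \
       theater_account_case2 + lunch_account_case2 == 80
# ...but the theater account alone looks twice as expensive in case 1, which
# is why replacing the lost ticket feels harder than spending a second $40
# after losing cash.
```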

  Thaler recounts an amusing real-life example of mental accounting.15 A professor of finance he knows has a clever strategy for dealing with minor misfortunes. At the beginning of the year, the professor budgets for a generous donation to his favorite charity. Anything untoward that happens in the course of the year (a speeding ticket, replacing a lost possession, an unwanted touch from an impecunious relative) is then charged to the charity account. The system makes the losses painless, because the charity does the paying. The charity receives whatever is left over in the account. Thaler has nominated his friend as the world's first Certified Mental Accountant.

  In an interview with a magazine reporter, Kahneman himself confessed that he had succumbed to mental accounting. In his research with Tversky he had found that a loss is less painful when it is just an addition to a larger loss than when it is free-standing: losing a second $100 after having already lost $100 is less painful than losing $100 on each of two separate occasions. Keeping this concept in mind when moving into a new home, Kahneman and his wife bought all their furniture within a week after buying the house. If they had looked at the furniture as a separate account, they might have balked at the cost and ended up buying fewer pieces than they needed.16

  We tend to believe that information is a necessary ingredient to rational decision-making and that the more information we have, the better we can manage the risks we face. Yet psychologists report circumstances in which additional information gets in the way and distorts decisions, leading to failures of invariance and offering opportunities for people in authority to manipulate the kinds of risk that people are willing to take.

  Two medical researchers, David Redelmeier and Eldar Shafir, reported in the Journal of the American Medical Association on a study designed to reveal how doctors respond as the number of possible options for treatment is increased.17 Any medical decision is risky; no one can know for certain what the consequences will be. In each of Redelmeier and Shafir's experiments, the introduction of additional options raised the probability that the physicians would choose either the original option or decide to do nothing.

  In one experiment, several hundred physicians were asked to prescribe treatment for a 67-year-old man with chronic pain in his right hip. The doctors were given two choices: to prescribe a named medication or to "refer to orthopedics and do not start any new medication"; just about half voted against any medication. When the number of choices was raised from two to three by adding a second medication option, along with "refer to orthopedics," three-quarters of the doctors voted against medication and for "refer to orthopedics."

  Tversky believes that "probability judgments are attached not to events but to descriptions of events ... the judged probability of an event depends upon the explicitness of its description."18 As a case in point, he describes an experiment in which 120 Stanford graduates were asked to assess the likelihood of various possible causes of death. Each student evaluated one of two different lists of causes; the first listed specific causes of death and the second grouped the causes under a generic heading like "natural causes."

  The following table shows some of the estimated probabilities of death developed in this experiment:

  [Table of estimated probabilities not reproduced here.]

  These students vastly overestimated the probabilities of violent deaths and underestimated deaths from natural causes. But the striking revelation in the table is that the estimated probability of dying was higher when the specific circumstances were spelled out than when the students were asked to estimate only the totals from natural or unnatural causes.

  In another medical study described by Redelmeier and Tversky, two groups of physicians at Stanford University were surveyed for their diagnosis of a woman experiencing severe abdominal pain.19 After receiving a detailed description of the symptoms, the first group was asked to decide on the probability that this woman was suffering from ectopic pregnancy, a gastroenteritis problem, or "none of the above." The second group was offered three additional possible diagnoses along with the choices of pregnancy, gastroenteritis, and "none of the above" that had been offered to the first group.