The interesting feature of this experiment was the handling of the "none of the above" option by the second group of doctors. Assuming that the average competence of the doctors in each group was essentially equal, one would expect that that option as presented to the first group would have included the three additional diagnoses with which the second group was presented. In that case, the second group would be expected to assign a probability to the three additional diagnoses plus "none of the above" that was approximately equal to the 50% probability assigned to "none of the above" by the first group.
That is not what happened. The second group of doctors assigned a 69% probability to "none of the above" plus the three additional diagnoses and only 31% to the possibility of pregnancy or gastroenteritis, to which the first group had assigned a 50% probability. Apparently, the greater the number of possibilities, the higher the probabilities assigned to them.
Daniel Ellsberg (the same Ellsberg as the Ellsberg of the Pentagon Papers) published a paper back in 1961 in which he defined a phenomenon he called "ambiguity aversion."20 Ambiguity aversion means that people prefer to take risks on the basis of known rather than unknown probabilities. Information matters, in other words. For example, Ellsberg offered several groups of people a chance to bet on drawing either a red ball or a black ball from two different urns, each holding 100 balls. Urn 1 held 50 balls of each color; the breakdown in Urn 2 was unknown. Probability theory would suggest that Urn 2 was also split 50-50, for there was no basis for any other distribution. Yet the overwhelming preponderance of the respondents chose to bet on the draw from Urn 1.
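Why does probability theory treat the two urns alike? If total ignorance about Urn 2 is represented as a uniform prior over its 101 possible compositions, the chance of drawing red from it works out to exactly 50%, the same as from Urn 1. Here is a minimal simulation sketch in Python; the uniform prior is my modeling assumption for "unknown," not something Ellsberg specified:

```python
import random

TRIALS = 100_000

def draw_urn1() -> bool:
    # Urn 1: known composition, 50 red and 50 black.
    return random.random() < 0.5  # True = red

def draw_urn2() -> bool:
    # Urn 2: unknown mix; model ignorance as a uniform prior
    # over every possible composition (an assumption).
    reds = random.randint(0, 100)
    return random.random() < reds / 100

wins1 = sum(draw_urn1() for _ in range(TRIALS))
wins2 = sum(draw_urn2() for _ in range(TRIALS))
print(f"Urn 1 red rate: {wins1 / TRIALS:.3f}")  # ~0.500
print(f"Urn 2 red rate: {wins2 / TRIALS:.3f}")  # ~0.500
```

Both rates converge on 0.500, which is why, on paper, a bettor should be indifferent between the urns; Ellsberg's respondents were anything but.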
Tversky and another colleague, Craig Fox, explored ambiguity aversion more deeply and discovered that matters are more complicated than Ellsberg suggested.21 They designed a series of experiments to discover whether people's preference for clear over vague probabilities appears in all instances or only in games of chance.
The answer came back loud and clear: people will bet on vague beliefs in situations where they feel especially competent or knowledgeable, but they prefer to bet on chance when they do not. Tversky and Fox concluded that ambiguity aversion "is driven by the feeling of incompetence ... [and] will be present when subjects evaluate clear and vague prospects jointly, but it will greatly diminish or disappear when they evaluate each prospect in isolation."22
People who play dart games, for example, would rather play darts than games of chance, although the probability of success at darts is vague while the probability of success at games of chance is mathematically predetermined. People knowledgeable about politics and ignorant about football prefer betting on political events to betting on games of chance set at the same odds, but they will choose games of chance over sports events under the same conditions.
In a 1992 paper that summarized advances in Prospect Theory, Kahneman and Tversky made the following observation: "Theories of choice are at best approximate and incomplete ... Choice is a constructive and contingent process. When faced with a complex problem, people ... use computational shortcuts and editing operations."23 The evidence in this chapter, which summarizes only a tiny sample of a huge body of literature, reveals repeated patterns of irrationality, inconsistency, and incompetence in the ways human beings arrive at decisions and choices when faced with uncertainty.
Must we then abandon the theories of Bernoulli, Bentham, Jevons, and von Neumann? No. There is no reason to conclude that the frequent absence of rationality, as originally defined, must yield the point to Macbeth that life is a story told by an idiot.
The judgment of humanity implicit in Prospect Theory is not necessarily a pessimistic one. Kahneman and Tversky take issue with the assumption that "only rational behavior can survive in a competitive environment, and the fear that any treatment that abandons rationality will be chaotic and intractable." Instead, they report that most people can survive in a competitive environment even while succumbing to the quirks that make their behavior less than rational by Bernoulli's standards. "[P]erhaps more important," Tversky and Kahneman suggest, "the evidence indicates that human choices are orderly, although not always rational in the traditional sense of the word."24 Thaler adds: "Quasi-rationality is neither fatal nor immediately self-defeating."25 Since orderly decisions are predictable, there is no basis for the argument that behavior is going to be random and erratic merely because it fails to provide a perfect match with rigid theoretical assumptions.
Thaler makes the same point in another context. If we were always rational in making decisions, we would not need the elaborate mechanisms we employ to bolster our self-control, ranging all the way from dieting resorts, to having our income taxes withheld, to betting a few bucks on the horses but not to the point where we need to take out a second mortgage. We accept the certain loss we incur when buying insurance, which is an explicit recognition of uncertainty. We employ those mechanisms, and they work. Few people end up in either the poorhouse or the nuthouse as a result of their own decision-making.
Still, the true believers in rational behavior raise another question. With so much of this damaging evidence generated in psychology laboratories, in experiments with young students, in hypothetical situations where the penalties for error are minimal, how can we have any confidence that the findings are realistic, reliable, or relevant to the way people behave when they have to make decisions?
The question is an important one. There is a sharp contrast between generalizations based on theory and generalizations based on experiments. De Moivre first conceived of the bell curve by writing equations on a piece of paper, not, like Quetelet, by measuring the dimensions of soldiers. But Galton conceived of regression to the mean, a powerful concept that makes the bell curve operational in many instances, by studying sweet peas and generational change in human beings; he came up with the theory after looking at the facts.
Alvin Roth, an expert on experimental economics, has observed that Nicholas Bernoulli conducted the first known psychological experiment more than 250 years ago: he proposed the coin-tossing game between Peter and Paul that guided his cousin Daniel to the discovery of utility.26 Experiments conducted by von Neumann and Morgenstern led them to conclude that the results "are not so good as might be hoped, but their general direction is correct."27 The progression from experiment to theory has a distinguished and respectable history.
It is not easy to design experiments that overcome the artificiality of the classroom and the tendency of respondents to lie or to harbor disruptive biases, especially when they have little at stake. But we must be impressed by the remarkable consistency evident in the wide variety of experiments that tested the hypothesis of rational choice. Experimental research has developed into a high art.*
Studies of investor behavior in the capital markets reveal that most of what Kahneman and Tversky and their associates hypothesized in the laboratory is played out by the behavior of investors who produce the avalanche of numbers that fill the financial pages of the daily paper. Far away from the laboratory and the classroom, this empirical research confirms a great deal of what experimental methods have suggested about decision-making, not just among investors, but among human beings in general.
As we shall see, the analysis will raise another question, a tantalizing one. If people are so dumb, how come more of us smart people don't get rich?
Investors must expect to lose occasionally on the risks they take. Any other assumption would be foolish. But theory predicts that the expectations of rational investors will be unbiased, to use the technical expression: a rational investor will overestimate part of the time and underestimate part of the time but will not overestimate or underestimate all of the time, or even most of the time. Rational investors are not among the people who always see the glass as either half empty or half full.
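What "unbiased" means here can be made concrete with a toy simulation. The return figures below are arbitrary assumptions chosen only for illustration:

```python
import random

random.seed(7)
TRUE_RETURN = 0.08  # the "true" annual return, an assumed figure

# A rational investor's estimates scatter around the truth with no tilt...
unbiased = [TRUE_RETURN + random.gauss(0, 0.03) for _ in range(10_000)]
# ...while a "glass half full" investor overshoots systematically.
biased = [TRUE_RETURN + 0.02 + random.gauss(0, 0.03) for _ in range(10_000)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"unbiased mean error: {mean(unbiased) - TRUE_RETURN:+.4f}")  # ~0.0000
print(f"biased mean error:   {mean(biased) - TRUE_RETURN:+.4f}")    # ~+0.0200
```

The unbiased investor is frequently wrong but wrong in both directions equally; only the biased investor's errors pile up on one side.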
Nobody really believes that the real-life facts fit that stylized description of investors always rationally trading off risk and return. Uncertainty is scary. Hard as we try to behave rationally, our emotions often push us to seek shelter from unpleasant surprises. We resort to all sorts of tricks and dodges that lead us to violate the rational prescriptions. As Daniel Kahneman points out, "The failure of the rational model is not in its logic but in the human brain it requires. Who could design a brain that could perform the way this model mandates? Every single one of us would have to know and understand everything, completely and at once."1 Kahneman was not the first to recognize the rigid constraints of the rational model, but he was one of the first to explain the consequences of that rigidity and the manner in which perfectly normal human beings regularly violate it.
If investors have a tendency to violate the rational model, that model may not be a very reliable description of how the capital markets behave. In that case, new measures of investment risk would be in order.
Consider the following scenario. Last week, after weeks of indecision, you finally liquidated your long-standing IBM position at $80 a share. This morning you check the paper and discover that IBM is selling at $90. The stock you bought to replace IBM is down a little. How do you react to this disappointing news?
Your first thought might be whether you should tell your spouse about what has happened. Or you might curse yourself for being impatient. You will surely resolve to move more slowly in the future before scrapping a long-term investment, no matter how good an idea it seems. You might even wish that IBM had disappeared from the market the instant you sold it, so that you would never learn how it performed afterward.
The psychologist David Bell has suggested that "decision regret" is the result of focusing on the assets you might have had if you had made the right decision.2 Bell poses the choice between a lottery that pays $10,000 if you win and nothing if you lose versus $4,000 for certain. If you choose to play the lottery and lose, you tell yourself that you were greedy and were punished by fate, but then you go on about your business. But suppose you choose the $4,000 certain, the more conservative choice, and then find out that the lottery paid off at $10,000. How much would you pay never to learn the outcome?
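Bell's example reduces to a few lines of arithmetic. The sketch below assumes 50-50 odds for the lottery and measures regret as the simple gap between what you received and the best forgone alternative; both are illustrative assumptions of mine rather than Bell's formal model:

```python
# Toy regret calculation for Bell's lottery (50-50 odds and the
# linear regret measure are illustrative assumptions).
P_WIN = 0.5
LOTTERY_WIN, LOTTERY_LOSE = 10_000, 0
SURE_THING = 4_000

ev_lottery = P_WIN * LOTTERY_WIN + (1 - P_WIN) * LOTTERY_LOSE
print(f"expected value of lottery: ${ev_lottery:,.0f}")  # $5,000 > $4,000

# Regret = what you got minus the best you could have had, given the draw.
regret_sure_if_win = SURE_THING - LOTTERY_WIN      # took $4,000, missed $10,000
regret_lottery_if_lose = LOTTERY_LOSE - SURE_THING  # took $0, missed $4,000
print(f"regret of sure thing when lottery wins: {regret_sure_if_win:,}")   # -6,000
print(f"regret of lottery when it loses:        {regret_lottery_if_lose:,}")  # -4,000
```

The point of Bell's question follows directly: the regret term only bites if you learn the forgone outcome, which is why never finding out can be worth paying for.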
Decision regret is not limited to the situation in which you sell a stock and then watch it go through the roof. What about all those stocks you never bought, many of which are performing better than the stocks you did buy? Even though everyone knows it is impossible to choose only top performers, many investors suffer decision regret over those forgone assets. I believe that this kind of emotional insecurity has a lot more to do with decisions to diversify than all of Harry Markowitz's most elegant intellectual perorations on the subject: the more stocks you own, the greater the chance of holding the big winners!
A similar motivation prompts investors to turn their trading over to active portfolio managers, despite evidence that most of them fail to outperform the major market indexes over the long run. The few who do succeed on occasion tend to show little consistency from year to year; we have already seen how difficult it was to distinguish between luck and skill in the cases of American Mutual and AIM Constellation.* Yet the law of averages predicts that about half the active managers will beat the market this year. Shouldn't your manager be among them? Somebody is going to win out, after all.
The temptations generated by thoughts of forgone assets are irresistible to some people. Take Barbara Kenworthy, who was manager of a $600 million bond portfolio at Prudential Investment Advisors in May 1995. The Wall Street Journal quoted Ms. Kenworthy as saying, "We're all creatures of what burned us most recently."3 To explain what she meant, the Journal commented, "Ms. Kenworthy is plunging into long-term bonds again despite her reckoning that value isn't quite there, because not to invest would be to momentarily lag behind the pack." The reporter, with a sense of the ironic, then remarked, "This is an intriguing time horizon for an investor in 30-year bonds."
Imagine yourself as an investment adviser trying to decide whether to recommend Johnson & Johnson or a start-up biogenetic company to a client. If all goes well, the prospects for the start-up company are dazzling; Johnson & Johnson, though a lot less exciting, is a good value at its current price. And Johnson & Johnson is also a "fine" company with a widely respected management team. What will you do if you make the wrong choice? The day after you recommend the start-up company, its most promising new drug turns out to be a wash-out. Or right after you recommend Johnson & Johnson, another pharmaceutical company issues a new product to compete with its biggest-selling drug. Which outcome will generate less decision regret and make it easier to go on working with a disgruntled client?
Keynes anticipated this question in The General Theory. After describing an investor with the courage to be "eccentric, unconventional and rash in the eyes of average opinion," Keynes says that his success "will only confirm the general belief in his rashness; and ... if his decisions are unsuccessful ... he will not receive much mercy. Worldly wisdom teaches that it is better for reputation to fail conventionally than to succeed unconventionally."4
Prospect Theory confirms Keynes's conclusion by predicting which decision you will make. First, the absolute performance of the stock you select is relatively unimportant; what matters is the start-up company's performance compared with Johnson & Johnson's performance, taken as a reference point. Second, loss aversion and anxiety will make the joy of winning on the start-up company less than the pain if you lose on it. Johnson & Johnson is an acceptable "long-term" holding even if it often underperforms.
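The asymmetry behind that second point can be made concrete with the value function Kahneman and Tversky estimated in their 1992 paper. Applying it to a hypothetical 20% swing against the Johnson & Johnson reference point is my illustration, not the book's:

```python
# Prospect Theory value function; alpha = 0.88 and lambda = 2.25 are
# Kahneman and Tversky's 1992 estimates. The 20% swing is hypothetical.
ALPHA = 0.88   # diminishing sensitivity to gains and losses
LAMBDA = 2.25  # loss aversion: losses loom roughly 2.25x larger than gains

def value(x: float) -> float:
    """Subjective value of a gain or loss x relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

print(f"value of a 20-point gain: {value(20):+.1f}")   # about +14
print(f"value of a 20-point loss: {value(-20):+.1f}")  # about -31
```

A loss of a given size hurts more than twice as much as the equivalent gain pleases, which is exactly why the adviser leans toward the safe, conventional recommendation.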
The stocks of good companies are not necessarily good stocks, but you can make life easier by agreeing with your clients that they are. So you advise your client to buy Johnson & Johnson.
I am not making up a story out of whole cloth. An article in The Wall Street Journal of August 24, 1995, goes on at length about how professional investment managers have grown leery of investing in financial instruments known as derivatives-the subject of the next chapter-as a result of the widely publicized disasters at Procter & Gamble and in Orange County, California, among others. The article quotes John Carroll, manager of GTE Corporation's $12 billion pension fund: "If you made the right call and used derivatives, you might get a small additional return. But if you make the wrong call, you could wind up unemployed, with a big dent in your credibility as an investor." Andrew Turner, director of research at a leading consulting firm for institutional investors, adds, "Even if you keep your job, you don't want to get labeled as [someone] who got snookered by an investment bank." A major Boston money manager agrees: "If you buy comfortable-looking ... stocks like Coca Cola, you're taking very little career risk because clients will blame a stupid market if things go wrong."
With Richard Thaler in the vanguard, a group of academic economists has responded to flaws in the rational model by launching a new field of study called "behavioral finance." Behavioral finance analyzes how investors struggle to find their way through the give and take between risk and return, one moment engaging in cool calculation and the next yielding to emotional impulses. The result of this mixture of the rational and the not-so-rational is a capital market that itself fails to perform consistently in the way the theoretical models predict.
Meir Statman, a professor in his late forties at the University of Santa Clara, describes behavioral finance as "not a branch of standard finance: it is its replacement with a better model of humanity."5 We might dub the members of this group the Theory Police, because they are constantly checking to see whether investors are obeying or disobeying the laws of rational behavior as laid down by the Bernoullis, Jevons, von Neumann, Morgenstern, and Markowitz.
Richard Thaler started thinking about these problems in the early 1970s, while working on his doctoral dissertation at the University of Rochester, an institution known for its emphasis on rational theory.6 His subject was the value of a human life, and he was trying to prove that the correct measure of that value is the amount people would be willing to pay to save a life. After studying risky occupations like mining and logging, he decided to take a break from the demanding statistical modeling he was doing and began to ask people what value they would put on their own lives.
He started by asking two questions. First, how much would you be willing to pay to eliminate a one-in-a-thousand chance of immediate death? And how much would you have to be paid to accept a one-in-a-thousand chance of immediate death? He reports that "the differences between the answers to the two questions were astonishing. A typical answer was 'I wouldn't pay more than $200, but I wouldn't accept an extra risk for $50,000!'" Thaler concluded that "the disparity between buying and selling prices was very interesting."
He then decided to make a list of what he called "anomalous behaviors": behaviors that violated the predictions of standard rational theory. The list included examples of large differences between the prices at which a person would be willing to buy and sell the same item. It also included examples of the failure to recognize sunk costs (money spent that would never be recouped), as with the $40 theater ticket in the previous chapter. Many of the people he questioned would "choose not to choose regret." In 1976, he used the list as the basis for an informal paper that he circulated only to close friends and "to colleagues I wanted to annoy."
Shortly thereafter, while attending a conference on risk, Thaler met two young researchers who had been converted by Kahneman and Tversky to the idea that so-called anomalous behavior is often really normal behavior, and that adherence to the rules of rational behavior is the exception. One of them later sent Thaler a paper by Kahneman and Tversky called "Judgment Under Uncertainty." After reading it, Thaler remarks, "I could hardly contain myself." A year later he met Kahneman and Tversky, and he was off and running.