This is a good book to read in conjunction with Gladwell’s Outliers; it touches on some of the same topics, but from a very different perspective. The author is a professor at Caltech, where he teaches students about the science and mathematics of randomness and probability. Despite this, Mlodinow produces a book that is only slightly less accessible than that of the more popular Gladwell. Readers can, if they wish, skim over the descriptions of how our knowledge of these processes evolved over the centuries and still benefit from the examples and conclusions that are presented.
While Gladwell dwells on how particular circumstances can lead to advantageous results for individuals, Mlodinow focuses on how often what we view as cause and effect is really just the result of random processes occurring in very complex systems (the lives of human beings). The two approaches are not unrelated, but they are more complementary than supplementary. Most of Mlodinow’s work looks at the distributed results of a group of essentially equal individuals, such as professional athletes, mutual fund operators, and Hollywood executives, and analyzes and illustrates the role of randomness in their success relative to their peers. By Gladwell’s logic, these people are already successful. He is more concerned with how and why these individuals attained that level of success while others did not. His parameters are things like age, gender, race, education, and wealth, not what we would normally consider random occurrences.
It is worth collecting a few of Mlodinow’s observations on the effects of randomness and coupling them with some of his descriptions of how the human intellect is not wired to deal effectively with random processes.
The author culminates his narrative on randomness and probabilities by describing "normal accident theory," a concept ascribed to Yale sociologist Charles Perrow. It was developed after studying the Three Mile Island incident, where a series of minor issues cascaded into a near disaster (compare this with Gladwell’s description of why we have aircraft accidents). In Mlodinow’s words:
"...in complex systems (among which I count our lives) we should expect that minor factors we can usually ignore will by chance sometimes cause major incidents.....Called normal accident theory, Perrow’s doctrine describes how that happens—how accidents can occur without clear causes, without those glaring errors and incompetent villains sought by corporate and government commissions. But although normal accident theory is a theory of why, inevitably, things sometimes go wrong, it could also be flipped around to explain why, inevitably, they sometimes go right. For in a complex undertaking, no matter how many times we fail, if we keep trying there is often a good chance we will eventually succeed....The normal accident theory of life shows not that the connection between actions and rewards is random but that random influences are as important as our qualities and actions."
Much of what the author discusses can be thought of and understood in terms of a simple coin-tossing experiment (provided you can occasionally think in terms of coins with more than two sides). Instead of a coin with heads or tails for sides, think of one that has a "good" side and a "bad" side. So here we are tooling through life pursuing our goals as best we can while being buffeted by a number of minor but random occurrences (a long red light that causes us to be late for a meeting, for example). These perturbations can have positive or negative effects. What we know about tossing coins is that eventually the proportions of heads and tails will approach equality. What we don’t usually consider is that while we spend our time tossing this coin there is a considerable probability that we will throw five or ten straight heads or tails. Or consider millions of people tossing coins. There will be a number of people who will experience long strings of good or bad perturbations. Thus are stars born while others are damned to lives of misery and frustration.
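The surprising frequency of streaks is easy to check with a quick simulation (my own sketch, not from the book; the toss count and streak length are illustrative):

```python
import random

def longest_run(n_tosses, rng):
    """Length of the longest run of identical outcomes in n fair coin tosses."""
    longest = current = 1
    prev = rng.random() < 0.5
    for _ in range(n_tosses - 1):
        toss = rng.random() < 0.5
        current = current + 1 if toss == prev else 1
        longest = max(longest, current)
        prev = toss
    return longest

rng = random.Random(42)
trials = 10_000
hits = sum(longest_run(100, rng) >= 5 for _ in range(trials))
print(f"P(run of 5+ in 100 tosses) ~ {hits / trials:.2f}")
```

In 100 tosses, a run of five or more straight heads or tails is not a fluke; it is close to a certainty, which is exactly the intuition the book is attacking.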
The author provides a number of examples where randomness seems an inevitable explanation. Many of the most interesting ones come from the worlds of art and finance. Consider the popularity of a piece of music.
"For their study they recruited 14,341 participants who were asked to listen to, rate, and if they desired, download 48 songs by bands they had not heard of. Some of the participants were allowed to view data on the popularity of each song—that is on how many participants had downloaded it. These participants were divided into eight separate "worlds" and could only see the data on downloads of people in their own world...each world evolved independently. If the deterministic view of the world were true the same songs should have dominated in the eight worlds....But the researchers found exactly the opposite: the popularity of individual songs varied widely among the different worlds....In this experiment, as one song or another by chance got an early edge in downloads, its seeming popularity influenced future shoppers (Tipping Point?). It is a phenomenon well known in the movie industry: movie goers will report liking a movie more when they hear beforehand how good it is."
The deterministic view would have you believe that "experts" study the buying habits and preferences of customers and predict what will be the next hit or best seller. That is, they study the past and try to replicate it. Mlodinow takes great joy in pointing out:
"John Grisham’s manuscript for A Time to Kill was rejected by twenty-six publishers; his second manuscript for The Firm drew interest from publishers only after a bootleg copy circulating in Hollywood drew a $600,000 offer for the movie rights. Dr. Seuss’s first children’s book, And To Think That I Saw It On Mulberry Street, was rejected by twenty-seven publishers. And J. K. Rowling’s first Harry Potter manuscript was rejected by nine."
Sometimes you have to flip that coin many times before you get a desired result. And then there is this quote from a Hollywood executive.
"If I had said yes to all the projects I turned down and no to all the ones I took, it would have worked out about the same."
The statistics of small numbers is especially relevant to the movie industry. The author devotes some space to the histories of various film studio executives (their successes, their failures, and sometimes both) to support the following statement.
"That means that if each of 10 Hollywood executives tosses 10 coins, although each has an equal chance of being the winner or the loser, in the end there will be winners and losers. In this example, the chances are 2 out of 3 that at least one of the executives will score 8 or more heads or tails."
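The "2 out of 3" figure in this quote can be verified directly from the binomial distribution (my own check, not code from the book):

```python
from math import comb

# P(8 or more heads OR 8 or more tails in 10 fair tosses), for one executive.
p_extreme = 2 * sum(comb(10, k) for k in (8, 9, 10)) / 2**10  # = 112/1024

# P(at least one of 10 independent executives hits such an extreme streak).
p_at_least_one = 1 - (1 - p_extreme) ** 10
print(f"{p_extreme:.4f}  {p_at_least_one:.3f}")  # ~0.1094 and ~0.686
```

An individual streak of 8-or-more is an 11% event, yet with ten executives flipping, someone almost certainly looks like a genius or a bum.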
The unlucky will soon be unemployed while the lucky are lavishly rewarded with money and accolades. If they are really smart they will move on to another position before their luck changes, presumably to a higher position where they can dispose of the poor souls who are visited with a run of bad luck.
Mlodinow also lobs a few randomness examples at the financial industry. He considers the performance of 800 mutual funds over a five-year period. He plots the performance of each fund relative to the mean in ascending order, so that entry 800 is the highest performer and the first is the lowest. A smooth curve is obtained by plotting these points, with the first 400 being negative and the second 400 being positive. A knowledgeable investor might come up with numerous reasons why any given fund performed as well or as poorly as it did. An amateur investor would certainly have a hard time selecting any fund out of the bottom 400. Mlodinow then plots the performance of these funds against the mean for the succeeding five-year period, but he keeps each fund at the same location on the axis that it earned initially. If past performance were a predictor of future performance, or if performance resulted solely from the acumen of each fund’s manager, one would expect a roughly similar curve to appear. Instead, any correlation between past and current performance disappears, and one is left with what the author describes as random noise.
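The effect is easy to reproduce with a toy model (my own sketch, not from the book): if each fund's relative performance in each period is pure noise, then ordering funds by the first period tells you essentially nothing about the second.

```python
import random

rng = random.Random(0)
n_funds = 800

# Model each fund's relative performance in each period as pure noise.
period1 = [rng.gauss(0, 1) for _ in range(n_funds)]
period2 = [rng.gauss(0, 1) for _ in range(n_funds)]

# Order funds by first-period performance, then look at their second period.
order = sorted(range(n_funds), key=lambda i: period1[i])
second = [period2[i] for i in order]

# Pearson correlation between first-period rank and second-period result.
mean_r = (n_funds - 1) / 2
mean_s = sum(second) / n_funds
cov = sum((r - mean_r) * (s - mean_s) for r, s in enumerate(second))
var_r = sum((r - mean_r) ** 2 for r in range(n_funds))
var_s = sum((s - mean_s) ** 2 for s in second)
corr = cov / (var_r * var_s) ** 0.5
print(f"rank-vs-next-period correlation: {corr:.3f}")
```

The correlation hovers near zero, which is the "random noise" Mlodinow describes when he re-plots the second five-year period against the first-period ordering.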
"People systematically fail to see the role of chance in the success of ventures and in the success of the equity fund manager....And we unreasonably believe that the mistakes of the past must be consequences of ignorance or incompetence and could have been remedied by further study and improved insight. That’s why, for example, in spring 2007, when the stock of Merrill Lynch was trading around $95 a share, its CEO E. Stanley O’Neal could be celebrated as the risk-taking genius responsible, and in the fall of 2007, after the credit market collapsed, derided as the risk-taking cowboy responsible—and promptly fired. We afford automatic respect to superstar business moguls, politicians, and actors and to anyone flying around in a private jet, as if their accomplishments must reflect unique qualities not shared by those forced to eat commercial-airline food. And we place too much confidence in the overly precise predictions of people—political pundits, financial experts, business consultants—who claim a track record demonstrating expertise."
Mlodinow’s point is that the world is a complicated place and not easily understood even in retrospect. Extrapolating to the future is extremely difficult, perhaps impossible. There are undoubtedly people who have an exceptional grasp of a particular situation and can be more accurate than the average person, but how does one decide who that person is if, generally, results are consistent with randomness? Mlodinow’s advice:
"It is more reliable to judge people by analyzing their abilities than by glancing at the scoreboard. Or as Bernoulli put it, ‘One should not appraise human action on the basis of its results’."
Trusting that someone who was correct once, twice, or three times will be correct the next time may not be a defensible strategy.
The author discusses the difficulties people have dealing with situations where random variables are in play.
"We often employ intuitive processes when we make assessments and choices in uncertain situations. Those processes no doubt carried an evolutionary advantage when we had to decide whether a saber-toothed tiger was smiling because it was fat and happy or because it was famished and saw us as its next meal. But the modern world has a different balance, and today those intuitive processes come with drawbacks. When we use our habitual ways of thinking to deal with today’s tigers, we can be led to decisions that are less than optimal or even incongruous.....The greatest challenge in understanding the role of randomness in life is that although the basic principles of randomness arise from everyday logic, many of the consequences that follow from those principles prove counterintuitive....The mechanisms by which people analyze situations involving chance are an intricate product of evolutionary factors, brain structure, personal experience, knowledge, and emotion. In fact, the human response to uncertainty is so complex that sometimes different structures within the brain come to different conclusions and apparently fight it out to determine which one will dominate."
Mlodinow lists three types of situations where people get in trouble. Two of these are evolution-driven; the third results from not understanding the situation, not understanding how probabilities work, or both.
The author describes "naive realism" as the belief that things are what they seem. He provides an interesting example from the life of a scientist named Daniel Kahneman. At the time, Kahneman was a psychology professor. He was given the task of lecturing a class of flight instructors on the latest knowledge related to behavior modification and how it might be applied to flight training. Studies with animals had taught him that positive reinforcement was the best way to produce results. The flight instructors all protested that their experience contradicted this claim. They had concluded that if you yell at someone for performing poorly, they will likely do better the next time, while complimenting them for a good performance usually means they will do worse the next time. Kahneman pondered this at length and came up with an explanation that changed his career path; he eventually won a Nobel Prize in economics for his studies of how and why people make the decisions they do.
"The student pilots all had a certain personal ability to fly fighter planes. Raising their skill level involved many factors and required extensive practice, so although their skill was slowly improving through flight training, the change wouldn’t be noticeable from one maneuver to the next. Any especially good or especially poor performance was thus mostly a matter of luck. So if a pilot made an exceptionally good landing—one far above his normal level of performance—then the odds would be good that he would perform closer to his norm—that is, worse—the next day. And if the instructor had praised him, it would appear that the praise had done no good. But if a pilot made an exceptionally bad landing—running the plane off the runway and into a vat of corn chowder in the base cafeteria—then the odds would be good that the next day he would perform closer to his norm—that is, better. And if his instructor had a habit of screaming ‘you clumsy ape’ when a student performed poorly, it would appear that his criticism did some good. In this way an apparent pattern would emerge.....the instructors in Kahneman’s class had concluded from such experiences that their screaming was a powerful educational tool. In reality it made no difference at all."
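This regression-to-the-mean effect falls out of a minimal model (my own sketch, assuming performance is fixed skill plus a large luck term; the threshold for a "bad landing" is illustrative):

```python
import random

rng = random.Random(1)

def performance(skill):
    # Observed performance = fixed skill plus a large random luck component.
    return skill + rng.gauss(0, 1)

skill = 0.0
improved_after_bad = total_bad = 0
for _ in range(100_000):
    first, second = performance(skill), performance(skill)
    if first < -1.5:          # an exceptionally bad landing
        total_bad += 1
        improved_after_bad += second > first
print(f"improved after a bad outing: {improved_after_bad / total_bad:.2f}")
```

After an exceptionally bad performance the next attempt is almost always better, with no screaming required; the instructor's criticism simply coincides with regression toward the pilot's normal skill level.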
There is a related issue that complicates our reasoning and leads us to deduce false patterns. Mlodinow refers to this as the "availability bias." This bias leads us to overweight the memories that are most vivid and accessible from our past and to deduce patterns that in fact do not exist.
"How probable is it that of the five lines at the grocery-store checkout you will choose the one that takes the longest? Unless you’ve been cursed by a practitioner of the black arts, the answer is around 1 in 5. So why, when you look back, do you get the feeling that you have a supernatural knack for choosing the longest line? Because you have more important things to focus on when things go right, but it makes an impression when the lady in front of you with a single item in her cart decides to argue about why her chicken is priced at $1.50 a pound when she is certain the sign at the meat counter said $1.49."
Need For Certainty
People seem to be wired to search for some organization or pattern in their observations even if the events are completely random. There is presumably some evolutionary advantage to this approach, but it can now lead to incorrect conclusions. Mlodinow dwells on how we come to view both successful and unsuccessful people and how we feel a need to explain success or failure as resulting from superior attributes or critical defects.
"Obviously it can be a mistake to assign brilliance in proportion to wealth. We cannot see a person’s potential, only his or her results, so we often misjudge people by thinking that the results must reflect the person."
It is not surprising to be told that we tend to assume that successful people have some innate qualities that justify and explain their success. What is surprising, and somewhat troubling, is that studies show that we will employ the same approach to people who would be described as failures. In their case we feel a need to justify their fate by assuming what befell them was due to some fault of their own. In viewing a homeless person we will tend to assume that person has some defect that put him in that situation.
"On an emotional level many people resist the idea that random influences are important even if, on an intellectual level, they understand that they are. If people underestimate the role of chance in the careers of moguls, do they also downplay its role in the lives of the least successful? In the 1960s that question inspired the social psychologist Melvin Lerner to look into society’s negative attitudes toward the poor. Realizing that ‘few people would engage in extended activity if they believed that there were a random connection between what they did and the rewards they received,’ Lerner concluded that ‘for the sake of their own sanity,’ people overestimate the degree to which ability can be inferred from success."
Lerner conducted controlled experiments in which a group of people observed one of their members undergo what appeared to be a painful electrical shock whenever the person failed at a learning exercise. The person was a plant who acted out the role, but this was unknown to the other observers.
"At first, as expected, most of the observers reported being extremely upset by their peer’s unjust suffering. But as the experiment continued, their sympathy for the victim began to erode. Eventually the observers, powerless to help, instead began to denigrate the victim. The more the victim suffered, the lower their opinion of her became. As Lerner had predicted, the observers had a need to understand the situation in terms of cause and effect....We unfortunately seem to be unconsciously biased against those in society who come out on the bottom."
Misunderstanding And Malfeasance
There are several relatively minor and harmless mistakes people make when faced with random processes. The most familiar is the "gambler’s fallacy." The standard example involves someone playing a slot machine and losing steadily. The assumption is often made that after so many losing attempts the odds are building up in favor of winning. In fact, the odds of winning in the future are exactly what they were when the person first sat down. Another tendency is for people to put too much faith in conclusions drawn from small numbers of events. In sports there is often a seven-game series to determine the best team. If the two teams are equally capable, then they have an equal chance of winning. If one team is in fact better and more likely to win a given game:
"...there is a sizeable chance that the inferior team will be crowned champion. For instance if one team is good enough to warrant beating another in 55% of its games, the weaker team will nevertheless win a 7-game series about 4 times out of 10. And if the superior team could be expected to beat its opponent, on average, 2 out of each 3 times they meet, the inferior team will still win a 7-game series about once every 5 matchups."
The choice of seven games is more one of practicality than an attempt to attain a statistically significant result.
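The quoted series probabilities follow directly from the binomial distribution; this short check (my own, not from the book) reproduces both figures:

```python
from math import comb

def series_win_prob(p, games=7):
    """Probability that a team winning each game with probability p
    wins a best-of-`games` series (first to (games + 1) // 2 wins)."""
    need = (games + 1) // 2
    # Win the series in game need, need+1, ..., games: win that final
    # game, having won exactly need-1 of the earlier ones.
    return sum(comb(g - 1, need - 1) * p**need * (1 - p)**(g - need)
               for g in range(need, games + 1))

print(f"{series_win_prob(0.45):.3f}")  # weaker team vs a 55% favorite: ~0.392
print(f"{series_win_prob(1/3):.3f}")   # weaker team vs a 2-in-3 favorite: ~0.173
```

To drive the weaker team's chances below, say, 5 percent against a 55 percent favorite, the series would have to run to dozens of games, which is why seven games is a practical compromise rather than a statistically decisive one.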
The more interesting examples are those where statistics are misused and the consequences are significant. Consider this case of doctors trying to interpret statistical results of mammograms.
"For instance, in studies in Germany and the United States, researchers asked physicians to estimate the probability that an asymptomatic woman between the ages of 40 and 50 who has a positive mammogram actually has cancer if 7 percent of mammograms show cancer when there is none. In addition, the doctors were told that the actual incidence was about 0.8 percent and the false negative rate about 10 percent. Putting that all together one can use Bayes methods to determine that a positive mammogram is due to cancer in only about 9 percent of cases. In the German group, however, one-third of the physicians concluded that the probability was about 90 percent, and the median estimate was 70 percent. In the American group, 95 out of 100 physicians estimated the probability to be around 75 percent."
This example is interesting and frightening in many ways. If, out of 1,000 women in a high-risk age group, 8 will have cancer and about 70 will be told they might have cancer when there is none, is it any wonder that physicians are beginning to question the efficacy of frequent mammograms in lower-risk groups? And how many of those 70 women were told they had a 75% chance of having breast cancer instead of a 9% chance? We already knew doctors had lethal penmanship skills, but given their inability to comprehend simple arithmetic, how comforting is it to consider how much trust we put in them?
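The Bayes calculation the physicians fumbled takes only a few lines (my own worked check of the numbers quoted above):

```python
# Bayes' rule applied to the mammogram figures quoted above.
incidence = 0.008        # prior: 0.8% of asymptomatic women in this group
sensitivity = 0.90       # i.e., a 10% false-negative rate
false_positive = 0.07    # 7% of cancer-free women still test positive

# Total probability of a positive test, from either true or false positives.
p_positive = incidence * sensitivity + (1 - incidence) * false_positive
p_cancer_given_positive = incidence * sensitivity / p_positive
print(f"P(cancer | positive mammogram) = {p_cancer_given_positive:.1%}")
```

The answer is about 9 percent: because the disease is rare, the 7 percent false-positive rate applied to the large healthy population swamps the true positives.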
Mlodinow’s examples from the legal profession are even more troubling because they arise not so much from incompetence as from an attempt to deceive. He refers to what he calls the "prosecutor’s fallacy" because statistics are often used to mislead jurors in legal cases. The first example concerns DNA testing.
"DNA experts regularly testify that the odds of a random person’s (DNA) matching that of the crime sample is less than 1 in 1 million or 1 in 1 billion. With these odds one could hardly blame the jury for thinking, throw away the key."
The author then describes the case of a jury that was so impressed with these statistics that it convicted a man even though eleven witnesses placed him in another state at the time of the crime. He served 4 years of a 3,100-year sentence before a follow-up test indicated that the first test was in error. Was this a case of a 1-in-1-billion occurrence, or a case of the jury being misled by a convenient misapplication of statistics?
"But there is another statistic that is often not presented to the jury, one having to do with the fact that labs make errors......Estimates of the error rate due to human causes vary, but many experts put it at around 1 percent. However, since the error rate of many labs has never been measured, courts often do not allow testimony on this overall statistic. Even if courts did allow testimony regarding false positives, how would jurors assess it? Most jurors assume that given the two types of error—the 1 in 1 billion accidental match and the 1 in 100 lab-error match—the overall error rate must be somewhere in between, say 1 in 500 million, which is still for most jurors beyond a reasonable doubt. But employing the laws of probability we find a much different answer.....that is, the odds are 1 in 100. Given both possible causes, therefore, we should ignore the fancy expert testimony about the odds of accidental matches and focus instead on the much higher laboratory error rate—the very data that courts often do not allow attorneys to present!"
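Mlodinow's point about the two error rates can be made concrete with a back-of-the-envelope combination (my own sketch of the arithmetic, using the figures quoted above):

```python
# Probability that an innocent person's sample is reported as a match,
# combining the two error sources quoted above.
p_random_match = 1e-9   # accidental DNA match: "1 in 1 billion"
p_lab_error = 0.01      # lab/human error producing a false match: "1 in 100"

# A false report occurs if either error happens (they are nearly exclusive),
# so the combined rate is dominated entirely by the larger one.
p_false_report = 1 - (1 - p_random_match) * (1 - p_lab_error)
print(f"overall false-match rate ~ 1 in {round(1 / p_false_report)}")
```

The combined rate is essentially 1 in 100, not "somewhere in between" the two figures: error probabilities from independent sources add, so the tiny accidental-match rate is irrelevant next to the lab-error rate.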
Finally there is a most famous misapplication of probabilities and statistics from the O. J. Simpson trial.
"The renowned attorney and Harvard Law School professor Alan Dershowitz also successfully used the prosecutor’s fallacy—to help defend O.J. Simpson in his trial for the murder of Simpson’s ex-wife, Nicole Brown Simpson, and a male companion.....The prosecution made a decision to focus the opening of their case on O.J.’s propensity toward violence against Nicole....As they put it ‘a slap is a prelude to homicide.’ The defense attorneys used this strategy as a launchpad for their accusations of duplicity, arguing that the prosecution had spent two weeks trying to mislead the jury and that the evidence that O.J. had battered Nicole on previous occasions meant nothing. Here is Dershowitz’s reasoning: 4 million women are battered annually by husbands and boyfriends in the United States, yet in 1992, according to the FBI Uniform Crime Reports, a total of 1,432, or 1 in 2,500 were killed by their husbands or boyfriends. Therefore, the defense retorted, few men who slap or beat their domestic partners go on to murder them. True? Yes. Convincing? Yes. Relevant? No. The relevant number is not the probability that a man who batters his wife will go on to kill her (1 in 2,500), but rather the probability that a battered wife who was murdered was murdered by her abuser. According to the Uniform Crime Reports for the United States and Its Possessions in 1993, the probability that Dershowitz (or the prosecution) should have reported was this one: of all the battered women murdered in the United States in 1993, some 90 percent were killed by their abuser. That statistic was not mentioned at the trial....Dershowitz may have felt justified in misleading the jury because in his words ‘the courtroom oath—"to tell the truth, the whole truth, and nothing but the truth" —is applicable only to witnesses. Defense attorneys, prosecutors, and judges don’t take this oath.....indeed it is fair to say that the American justice system is built on a foundation of not telling the whole truth’."