The Failure of Risk Management. Douglas W. Hubbard

is working. In order to determine that the new drug is really working, patients taking the real drug have to do measurably better than those taking the placebo (which may be a sugar pill). Which patients get the placebo is even hidden from the doctors so that their diagnoses are not biased.

      An analysis placebo produces the feeling that some analytical method has improved decisions and estimates even when it has not. Placebo is Latin for “I shall please,” and, no doubt, the mere appearance of structure and formality in risk management is pleasing to some. In fact, the placebo analogy goes a bit easy on risk management: in medical research, a mere placebo can actually produce a positive physiological effect beyond the perception of benefit, whereas when we use the term in the context of risk management we mean there is literally no benefit other than the perception of benefit. Several studies in very different domains show how any of us can be susceptible to this effect:

       Sports picks: A 2008 study at the University of Chicago tracked probabilities of outcomes of sporting events as assigned by participants who were given varying amounts of information about the teams without being told the names of teams or players. As participants were given more information about the teams in a given game, their confidence that they were picking a winner increased, even though the actual chance of picking the winner stayed nearly flat no matter how much information they were given.2 In another study, sports fans were asked to collaborate with others to improve predictions. Again, confidence went up after collaboration but actual performance did not. Indeed, participants rarely even changed their views from before the discussions; the net effect of collaboration was to seek confirmation of what they had already decided.3

       Psychological diagnosis: Another study showed how practicing clinical psychologists became more confident in their diagnoses and prognoses for various risky behaviors by gathering more information about patients, and yet, again, the agreement with observed outcomes of behaviors did not actually improve.4

       Investments: A psychology researcher at MIT, Paul Andreassen, did several experiments in the 1980s showing that gathering more information about stocks in investment portfolios improved confidence but without any improvement in portfolio returns. In one study, he showed how people tend to overreact to news and assume that the additional information is informative even though, on average, returns were not improved by these actions.5

       Trivia estimates: Another study investigating the benefits of collaboration asked subjects for estimates of trivia from an almanac. It considered multiple forms of interaction including the Delphi technique, free-form discussion, and other methods of collaboration. Although interaction did not improve estimates over simple averaging of individual estimates, the subjects did feel more satisfied with the results.6
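The “simple averaging of individual estimates” baseline in the study above is easy to see in principle. The following sketch is purely illustrative (the numbers and the almanac-style quantity are invented, not data from the study): when individual errors are independent and roughly unbiased, they partially cancel, so the group average tends to beat the typical individual.

```python
import random

random.seed(42)

TRUE_VALUE = 3_800   # hypothetical almanac quantity (e.g., a river length in miles)
N_ESTIMATORS = 30

# Each person estimates with independent, unbiased noise.
estimates = [TRUE_VALUE + random.gauss(0, 800) for _ in range(N_ESTIMATORS)]

group_average = sum(estimates) / len(estimates)

# Compare the typical individual's error to the error of the simple average.
avg_individual_error = sum(abs(e - TRUE_VALUE) for e in estimates) / len(estimates)
group_error = abs(group_average - TRUE_VALUE)

print(f"average individual error: {avg_individual_error:.0f}")
print(f"error of group average:   {group_error:.0f}")
```

This is the statistical point behind the finding: averaging already captures most of the achievable gain, so discussion that merely raises confidence adds little.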

       Lie detection: A 1999 study measured the ability of subjects to detect lies in controlled tests involving videotaped mock interrogations of “suspects.” The suspects were actors who were incentivized to conceal certain facts in staged crimes to create real nervousness about being discovered. Some of the subjects reviewing the videos received training in lie detection and some did not. The trained subjects were more confident in judgments about detecting lies even though they were worse than untrained subjects at detecting lies.7

      And these are just a few of many similar studies showing that we can engage in training, information gathering, and collaboration that improves confidence but not actual performance. We have no reason to believe that fundamental psychology observed in many different fields doesn't apply in risk management in business or government. The fact that a placebo exists in some areas means it could exist in other areas unless the data shows otherwise.

      Kahneman and Klein found that they agreed on this point: developing expert intuition in any field is not an automatic outcome of experience. Experts need “high-validity” feedback so that they can learn from the outcomes of their estimates and decisions. That feedback should be consistent (we get it most, if not all, of the time), quick (we don't have to wait long for it), and unambiguous.

      Risk management simply does not provide the consistent, immediate, and clear feedback that Kahneman and Klein argue we need as a basis for learning. Risk managers make estimates, decisions, or recommendations without learning their effects for some time, if ever. If risk went down after the implementation of a new policy, how would you know? How long would it take to confirm that the outcome was related to the action taken? How would you determine whether the outcome was just due to luck?
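The “luck” problem above can be made concrete with a toy simulation (the probabilities and time window here are my own illustrative assumptions, not figures from the text). Suppose a new policy genuinely halves the annual probability of a major incident, from 10 percent to 5 percent. Comparing a few years of incident counts before and after the change frequently fails to show any improvement:

```python
import random

random.seed(0)

def incidents(annual_prob, years):
    """Count the years in a window that contain a major incident,
    treating each year as an independent trial."""
    return sum(random.random() < annual_prob for _ in range(years))

YEARS = 5          # observation window before and after the policy change
P_BEFORE = 0.10    # assumed annual incident probability before the policy
P_AFTER = 0.05     # the policy really works: the probability is halved
TRIALS = 10_000

# In how many simulated histories does the "after" period look no better?
no_better = sum(
    incidents(P_AFTER, YEARS) >= incidents(P_BEFORE, YEARS)
    for _ in range(TRIALS)
)

print(f"In {no_better / TRIALS:.0%} of simulated histories, five years of")
print("post-policy data show no improvement, even though the risk was halved.")
```

With events this rare, the feedback signal is dominated by noise over any horizon a risk manager is likely to observe, which is exactly why experience alone is such a poor teacher here.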

      What we will not do to measure the performance of various methods is rely on the proclamations of any expert, regardless of his or her claimed level of knowledge or vociferousness. So even though I may finally have some credibility in claiming experience after thirty years in quantitative management consulting, I will not rely on appeals to my own authority regarding what works and what does not. I will rely instead on published research from large experiments. Anecdotes or quotes from “thought leaders” will be used only to illustrate a point, never to prove it.

      The potential existence of an analysis placebo, the difficulty of learning from experience alone in risk management, and the general lack of objective measurements of performance in risk management means that we should be wary of self-assessments in this field. We should bear in mind one particular statement in the previously mentioned article by Daniel Kahneman and Gary Klein:

      True experts, it is said, know when they don't know. However, nonexperts (whether or not they think they are) certainly do not know when they don't know. Subjective confidence is therefore an unreliable indication of the validity of intuitive judgments and decisions. (p. 524)
