We don’t live in the world for which conventional risk-management textbooks prepare us. No forecasting model predicted the impact of the current economic crisis, and its consequences continue to take establishment economists and business academics by surprise. Moreover, as we all know, the crisis has been compounded by the banks’ so-called risk-management models, which increased their exposure to risk instead of limiting it and rendered the global economic system more fragile than ever.
Low-probability, high-impact events that are almost impossible to forecast – we call them Black Swan events – are increasingly dominating the environment. Because of the Internet and globalisation, the world has become a complex system, made up of a tangled web of relationships and other interdependent factors. Complexity not only increases the incidence of Black Swan events but also makes forecasting even ordinary events impossible. All we can predict is that companies that ignore Black Swan events will go under.
Instead of trying to anticipate low-probability, high-impact events, we should reduce our vulnerability to them. Risk management, we believe, should be about lessening the impact of what we don’t understand – not a futile attempt to develop sophisticated techniques and stories that perpetuate our illusions of being able to understand and predict the social and economic environment.
To change the way we think about risk, we must avoid making six mistakes.
1 We think we can manage risk by predicting extreme events
This is the worst error we make, for a couple of reasons. One, we have an abysmal record of predicting Black Swan events. Two, by focusing our attention on a few extreme scenarios, we neglect other possibilities. In the process, we become more vulnerable.
It’s more effective to focus on the consequences – that is, to evaluate the possible impact of extreme events. Realising this, energy companies have finally shifted from predicting when accidents in nuclear plants might happen to preparing for the eventualities. In the same way, try to gauge how your company will be affected, compared with competitors, by dramatic changes in the environment. Will a small but unexpected fall in demand or supply affect your company a great deal? If so, it won’t be able to withstand sharp drops in orders, sudden rises in inventory, and so on.
In our private lives, we sometimes act in ways that allow us to absorb the impact of Black Swan events. We don’t try to calculate the odds that events will occur; we only worry about whether we can handle the consequences if they do. In addition, we readily buy insurance for health care, cars, houses, and so on. Does anyone buy a house and then check the cost of insuring it? You make your decision after taking into account the insurance costs. Yet in business we treat insurance as though it’s an option. It isn’t; companies must be prepared to tackle consequences and buy insurance to hedge their risks.
2 We are convinced that studying the past will help us manage risk
Risk managers mistakenly use hindsight as foresight. Alas, our research shows that past events don’t bear any relation to future shocks. World War I and the attacks of September 11, 2001 – major events like those didn’t have predecessors. The same is true of price changes. Until the late 1980s, the worst decline in stock prices in a single day had been around 10 per cent. Yet prices tumbled by 23 per cent on 19 October 1987. Why, then, would anyone have expected the next meltdown to be no worse than 23 per cent? History fools many.
You often hear risk managers – particularly those employed in the financial services industry – use the excuse “This is unprecedented”. They assume that if they try hard enough, they can find precedents for anything and predict everything. But Black Swan events don’t have precedents. In addition, today’s world doesn’t resemble the past; both interdependencies and nonlinearities have increased. Some policies have no effect for much of the time and then cause a large reaction.
People don’t take into account the types of randomness inherent in many economic variables. There are two kinds, with socioeconomic randomness being less structured and tractable than the randomness you encounter in statistics textbooks and casinos. It causes winner-take-all effects that have severe consequences. Less than 0,25 per cent of all the companies listed in the world represent around half the market capitalisation, less than 0,2 per cent of books account for approximately half their sales, less than 0,1 per cent of drugs generate a little more than half the pharmaceutical industry’s sales, and less than 0,1 per cent of risky events will cause at least half your losses.
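A minimal simulation makes the contrast vivid. The sketch below is our illustration, not a computation from the article: the Pareto tail exponent of 1.1 and the height figures are assumptions chosen only for the example.

```python
# Illustrative sketch: 'casino' randomness versus winner-take-all randomness.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Thin-tailed quantity, e.g. heights in centimetres (Gaussian, assumed figures).
heights = rng.normal(170, 10, n)

# Fat-tailed quantity with an assumed Pareto tail exponent of 1.1 –
# a rough stand-in for firm sizes or book sales.
sales = rng.pareto(1.1, n) + 1

def share_holding_half(x):
    """Smallest fraction of the population that accounts for half the total."""
    x = np.sort(x)[::-1]                              # largest values first
    k = np.searchsorted(np.cumsum(x), x.sum() / 2) + 1
    return k / len(x)

print(f"heights: top {share_holding_half(heights):.2%} account for half the total")
print(f"sales:   top {share_holding_half(sales):.2%} account for half the total")
# Typically prints roughly 48% for heights but well under 1% for sales –
# the winner-take-all concentration described above.
```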
Because of socioeconomic randomness, there’s no such thing as a ‘typical’ failure or a ‘typical’ success. There are typical heights and weights, but there’s no such thing as a typical victory or catastrophe. We have to predict both an event and its magnitude, which is tough because impacts aren’t typical in complex systems. For instance, when we studied the pharmaceuticals industry, we found that most sales forecasts don’t correlate with new drug sales. Even when companies had predicted success, they underestimated drugs’ sales by 22 times! Predicting major changes is almost impossible.
3 We don’t listen to advice about what we shouldn’t do
Recommendations of the ‘don’t’ kind are usually more robust than ‘dos’. For instance, telling someone not to smoke outweighs any other health-related advice you can provide. “The harmful effects of smoking are roughly equivalent to the combined good ones of every medical intervention developed since World War II. Getting rid of smoking provides more benefit than being able to cure people of every possible type of cancer,” points out genetics researcher Druin Burch in Taking the Medicine. In the same vein, had banks in the US heeded the advice not to accumulate large exposures to low-probability, high-impact events, they wouldn’t be nearly insolvent today, although they would have made lower profits in the past.
Psychologists distinguish between acts of commission and those of omission. Although their impact is the same in economic terms – a dollar not lost is a dollar earned – risk managers don’t treat them equally.
They place a greater emphasis on earning profits than they do on avoiding losses. However, a company can be successful by preventing losses while its rivals go bust – and it can then take market share from them. In chess, grand masters focus on avoiding errors; rookies try to win. Similarly, risk managers dislike not investing, even though doing nothing conserves value. But consider where you would be today if your investment portfolio had remained intact over the past two years, when everyone else’s fell by 40 per cent. Not losing almost half your retirement is undoubtedly a victory.
Positive advice is the province of the charlatan. The business sections in bookstores are full of success stories; there are far fewer tomes about failure. Such disparagement of negative advice makes companies treat risk management as distinct from profit making and as an afterthought. Instead, corporations should integrate risk-management activities into profit centres and treat them as profit-generating activities, particularly if the companies are susceptible to Black Swan events.
4 We assume that risk can be measured by standard deviation
Standard deviation – used extensively in finance as a measure of investment risk – shouldn’t be used in risk management. The standard deviation corresponds to the square root of average squared variations – not average variations. The use of squares and square roots makes the measure complicated. It only means that, in a world of tame randomness, around two-thirds of changes should fall within certain limits (between −1 and +1 standard deviations) and that variations in excess of seven standard deviations are practically impossible. However, this is inapplicable in real life, where movements can exceed 10, 20, or sometimes even 30 standard deviations. Risk managers should avoid using methods and measures connected to standard deviation, such as regression models, R-squares, and betas.
Standard deviation is poorly understood. Even quantitative analysts don’t seem to get their heads around the concept. In experiments we conducted in 2007, we gave a group of quants information about the average absolute movement of a stock (the mean absolute deviation), and they promptly confused it with the standard deviation when asked to perform some computations. When experts are confused, it’s unlikely that other people will get it right. In any case, anyone looking for a single number to represent risk is inviting disaster.
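A back-of-the-envelope calculation shows how badly the Gaussian picture fails, and how easy the quants’ confusion is. This is our sketch, not the authors’ computation; the 1 per cent daily volatility is an assumed round figure in the right historical ballpark.

```python
# Illustrative sketch: a Gaussian reading of standard deviation versus 1987.
import math

sigma_daily = 0.01            # assumed ~1% typical daily volatility
crash_move = 0.23             # the 23% one-day drop of 19 October 1987
z = crash_move / sigma_daily  # roughly a 23-sigma move

# Gaussian tail probability P(Z > z) = 0.5 * erfc(z / sqrt(2)).
p = 0.5 * math.erfc(z / math.sqrt(2))
print(f"z = {z:.0f} sigmas, Gaussian probability of such a day ~ {p:.1e}")
# ~1e-117: under 'tame' randomness a day like that should essentially
# never occur in the life of the universe – yet it happened.

# The related confusion tested above: for a Gaussian, the mean absolute
# deviation is only about 0.8 of the standard deviation.
print(f"MAD/SD ratio for a Gaussian: {math.sqrt(2 / math.pi):.3f}")
```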
5 We don’t appreciate that what’s mathematically equivalent isn’t psychologically so
In 1965, physicist Richard Feynman wrote in The Character of Physical Law that two mathematically equivalent formulations can be unequal in the sense that they present themselves to the human mind in different ways. Similarly, our research shows that the way a risk is framed influences people’s understanding of it. If you tell investors that, on average, they will lose all their money only every 30 years, they are more likely to invest than if you tell them they have a 3,3 per cent chance of losing a certain amount each year.
The same is true of airplane rides. We asked participants in an experiment: “You are on vacation in a foreign country and are considering flying a local airline to see a special island. Safety statistics show that, on average, there has been one crash every 1 000 years on this airline. It is unlikely you’ll visit this part of the world again. Would you take the flight?” All the respondents said they would.
We then changed the second sentence so it read: “Safety statistics show that, on average, one in 1 000 flights on this airline has crashed.” Only 70 per cent of the sample said they would take the flight. In both cases, the chance of a crash is 1 in 1 000; the latter formulation simply sounds more risky.
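The arithmetic behind both examples is identical, as a quick check of our own confirms:

```python
# Our quick check: the paired framings above encode identical probabilities.
print(f"{1 / 30:.1%}")     # 'once every 30 years'  -> 3.3% in any given year
print(f"{1 / 1000:.2%}")   # 'one crash per 1 000'  -> 0.10% per flight
# Same numbers either way; only the wording – and the listener's
# reaction – differs.
```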
Providing a best-case scenario usually increases the appetite for risk. Always look for the different ways in which risk can be presented to ensure that you aren’t being taken in by the framing or the maths.
6 We are taught that efficiency and maximising shareholder value don’t tolerate redundancy
Most executives don’t realise that optimisation makes companies vulnerable to changes in the environment. Biological systems cope with change; Mother Nature is the best risk manager of all. That’s partly because she loves redundancy. Evolution has given us spare parts – we have two lungs and two kidneys, for instance – that allow us to survive.
In companies, redundancy consists of apparent inefficiency: idle capacities, unused parts, and money that isn’t put to work. The opposite is leverage, which we are taught is good. It isn’t; debt makes companies – and the economic system – fragile. If you are highly leveraged, you could go under if your company misses a sales forecast, interest rates change, or other risks crop up. If you aren’t carrying debt on your books, you can cope better with changes.
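A hypothetical balance sheet makes the point concrete; the figures below are invented purely for illustration.

```python
# Hypothetical, illustrative balance sheet: leverage turns small shocks fatal.
assets, equity = 100.0, 4.0            # 25x leverage (assumed figures)
debt = assets - equity

for shock in (0.01, 0.03, 0.05):       # modest drops in asset value
    remaining = assets * (1 - shock) - debt
    print(f"{shock:.0%} asset drop -> equity {remaining:+.1f}")
# At a 5% drop the equity is gone (-1.0); a debt-free firm would simply
# be worth 5% less and carry on.
```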
Overspecialisation hampers companies’ evolution. David Ricardo’s theory of comparative advantage recommended that, for optimal efficiency, one country should specialise in making wine, another in manufacturing clothes, and so on. Arguments like this ignore unexpected changes. What will happen if the price of wine collapses? In the 1800s, many cultures in Arizona and New Mexico vanished because they depended on a few crops that couldn’t survive changes in the environment.
One of the myths about capitalism is that it is about incentives. It is also about disincentives. No one should have a piece of the upside without a share of the downside. However, the very nature of compensation adds to risk. If you give someone a bonus without clawback provisions, he or she will have an incentive to hide risk by engaging in transactions that have a high probability of generating small profits and a small probability of blowups. Executives can thus collect bonuses for several years. If blowups eventually take place, the managers may have to apologise but won’t have to return past bonuses. This applies to corporations, too. That’s why many CEOs become rich while shareholders stay poor. Society and shareholders should have the legal power to get back the bonuses of those who fail us. That would make the world a better place.
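A toy calculation – with made-up probabilities and payoffs of our own – shows why such trades appeal to a manager who is paid annually and faces no clawback:

```python
# Toy numbers (illustrative): steady small gains, rare blowup, no clawback.
p_blowup, small_gain, blowup_loss = 0.01, 1.0, 150.0

ev_per_year = (1 - p_blowup) * small_gain - p_blowup * blowup_loss
p_clean_decade = (1 - p_blowup) ** 10   # ten straight bonus years
print(f"expected P&L per year: {ev_per_year:+.2f}")             # -0.51
print(f"chance of a blowup-free decade: {p_clean_decade:.0%}")  # ~90%
# The strategy destroys value in expectation, yet nine managers in ten
# collect a full decade of bonuses before any blowup can surface.
```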
Moreover, we shouldn’t offer bonuses to those who manage risky establishments such as nuclear plants and banks. The chances are that they will cut corners in order to maximise profits. Society gives its greatest risk-management task to the military, but soldiers don’t get bonuses.
Remember that the biggest risk lies within us: we overestimate our abilities and underestimate what can go wrong. The ancients considered hubris the greatest defect, and the gods punished it mercilessly. Look at the number of heroes who faced fatal retribution for their hubris: Achilles and Agamemnon died as the price of their arrogance; Xerxes failed because of his conceit when he attacked Greece; and many generals throughout history have died for not recognising their limits. Any corporation that doesn’t recognise its Achilles’ heel is fated to die because of it.
©2014 Harvard Business School Publishing Corp. This article was first published in Harvard Business Review, October 2009.