Milo’s latest advice for investors and business people trying to come to grips with geopolitics is now available on Forbes.com. It’s called “Geopolitics, Investing and the Little Book of Psychic Cold Reading”.
Every time Milo and I teach about how organizations can make sense of their environments, we are confronted with the difficulty of explaining why uncertainty is so different from risk and why understanding that difference matters to entrepreneurs and managers. In this article, we address those questions and discuss the practical implications that flow from them.
Recently I had a discussion with a friend, a colonel in the Army, about the culture of risk among senior officers and, by extension, among managers. The culture of risk is an important question for any organization.
To understand the culture of risk, we must first distinguish between two types of risk. Type One risk is the risk of acting: you do something, and it leads to an error or a bad result. It is a reasonable assumption that most of our time in school and in higher education is spent learning how to reduce such risks.
Type Two risk is the opposite: the risk of not doing something that could be valuable. Of course, the two are linked: the more one reduces the risk of doing something, the more one increases the risk of not doing something valuable. The trick, unfortunately, is that we tend to focus more on Type One than on Type Two risks. On one level, this makes sense: after all, failure is very visible – a disaster, a lost war, a failed product launch, etc. In contrast, forfeited opportunity is invisible: we do not see what valuable things our caution has prevented us from doing, and no one is punished for not having invented something. Our education, liability laws and corporate governance structures push us towards a culture of Type One risk avoidance, i.e. towards reducing the risk of failure (Sarbanes-Oxley, anyone?). This is obviously a problem for innovation in the long term, but it doesn’t even reliably protect us. If it demonstrated nothing else, the financial crisis that began in 2008 showed that the very institutions entrusted with managing risk failed to do so properly. In short, we focus on risk avoidance at the expense of opportunity creation, and we don’t even avoid risk very well!
Karl Weick has long been known for his work on organization theory. In particular, his work focuses on how organizations make sense of complex and uncertain environments. Among Weick’s most famous works is his study of the Mann Gulch fire, an initially banal forest fire in 1949 that went wrong and resulted in the deaths of 13 firefighters. Weick’s analysis shows that in such conditions, a professional team faces what he calls a ‘cosmology episode’, i.e. an event so unexpected and powerful that it destroys the will and the ability of the victims to act, in particular to act as a group. It is a great piece of scholarship.
“Managing the Unexpected” explores how organizations can handle the unexpected. To do so, Weick and Sutcliffe chose to study organizations that are specifically created with that end in mind, which they call High-Reliability Organizations (HROs): firefighting teams, submarine crews, the control room of a nuclear plant, etc. What principles do these organizations implement in order to operate reliably?
In an earlier post about forecasting, I mentioned the work by Nassim Taleb on the concept of the black swan. In his remarkable book, “The Black Swan”, Taleb describes at length the characteristics of environments that can be subject to black swans (unforeseeable, high-impact events).
When we make a forecast, we usually base it, explicitly or implicitly, on an assumption of continuity in a statistical series. For example, a company building its sales forecast for next year considers past sales, estimates a trend based on those sales, makes some adjustments based on current circumstances and then generates a sales forecast. The hypothesis (or rather assumption, as it is rarely explicit) in this process is that each additional year is not fundamentally different from the previous years. In other words, the distribution of possible values for next year’s sales is Gaussian (or “normal”): the probability that sales land close to the trend is very high; the probability of an extreme variation (doubling or dropping to zero) is very low. In fact, the higher the envisaged variation, the lower the probability that such a variation will occur. As a result, it seems reasonable to discard extreme values in the forecasts: no marketing director is working on an assumption of sales dropping to zero.
Today I was reminded of the perils of forecasting while reviewing a Department of Defense document, the Joint Operating Environment 2010.
“JOE 2010” as it’s called, is designed to provide the various branches of the US Armed Forces a joint perspective on likely global trends, possible shocks and their future operating environment. If you’re interested in geopolitics and strategy, I recommend that you take a look.
Apart from its inherent interest, JOE 2010 opens with a defense planning timeline that business and financial strategy practitioners – and anyone who consumes their work – would do well to bear in mind. I have reproduced it verbatim here:
1900 If you are a strategic analyst for the world’s leading power, you are British, looking warily at Britain’s age-old enemy, France.
1910 You are now allied with France, and the enemy is now Germany.