Karl Weick has long been known for his work on organization theory, in particular on how organizations make sense of complex and uncertain environments. Among Weick’s most famous works is his study of the Mann Gulch fire, an initially banal forest fire in 1949 that went wrong and killed 13 firefighters. Weick’s analysis shows that in such conditions, a professional team faces what he calls a ‘cosmology episode’, i.e., an event so unexpected and powerful that it destroys the victims’ will and ability to act, in particular to act as a group. It is a great piece of scholarship.
“Managing the Unexpected” explores how organizations can handle the unexpected. To do so, Weick and Sutcliffe chose to study organizations created specifically with that end in mind, which they call High-Reliability Organizations (HROs): firefighting crews, submarine crews, the control room of a nuclear plant, and so on. What principles do these organizations implement in order to operate reliably?
The goal of strategy is to decide what to do in a given situation to achieve a given objective. Basically, strategic decisions come down to the question “what to do next?”. In environments characterized by uncertainty (defined as an objective lack of information), this is no simple question, and several approaches are possible to address it. Two dimensions characterize these possible approaches: prediction and control.
Prediction asks to what extent my approach relies on a forecast of the future environment. Strong prediction corresponds either to a planning-type approach – I create a detailed prediction of the future before initiating action – or to a vision-type one: I imagine the future and strive to make this vision a reality. Low prediction corresponds to a more adaptive approach: I do not try to predict the future environment; instead I move forward and adapt to changes along the way.
Control asks to what extent I can control the evolution of my environment. The overarching assumption of classic strategy is that the firm has little influence on its environment, which is for the most part given (or “exogenous”). All a firm can do is find a place in this environment (planning/positioning) or adapt when it changes (adaptation). Hence the importance of the notion of “fit” that the field insists upon (e.g., Michael Porter in 1996). At the opposite end of the spectrum, the field of entrepreneurship observes that a firm can change its environment in profound ways, sometimes from a vision defined ex ante, sometimes through a future-agnostic logic of gradual transformation of the environment. There are many examples of entrepreneurs who started with the odds apparently stacked against them and completely transformed their environments: Michael Dell, Richard Branson, and Sam Walton, to name just a few.
In an earlier post about forecasting, I mentioned Nassim Taleb’s work on the concept of the black swan. In his remarkable book “The Black Swan”, Taleb describes at length the characteristics of environments that are subject to black swans (unforeseeable, high-impact events).
When we make a forecast, we usually base it, explicitly or implicitly, on an assumption of continuity in a statistical series. For example, a company building its sales forecast for next year considers past sales, estimates a trend based on them, makes some adjustments for current circumstances, and then generates a sales forecast. The hypothesis (or rather the assumption, as it is rarely made explicit) in this process is that each additional year is not fundamentally different from the previous years. In other words, the distribution of possible values for next year’s sales is Gaussian (or “normal”): the probability that sales stay roughly the same is very high; the probability of an extreme variation (doubling or dropping to zero) is very low. In fact, the larger the envisaged variation, the lower the probability that it will occur. As a result, it seems reasonable to discard extreme values in the forecasts: no marketing director is working on an assumption of sales dropping to zero.
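The implicit reasoning can be made concrete with a minimal sketch (the sales figures are made up for illustration, not taken from any real company): fit a trend to past year-over-year changes, assume those changes are normally distributed, and the resulting confidence interval is so narrow that extreme outcomes never even appear on the radar.

```python
import statistics

# Hypothetical yearly sales in $M (illustrative numbers only).
sales = [100, 104, 107, 111, 115]

# Naive trend: the average year-over-year change.
changes = [b - a for a, b in zip(sales, sales[1:])]
trend = statistics.mean(changes)   # expected drift per year
sigma = statistics.stdev(changes)  # volatility of past changes

forecast = sales[-1] + trend
# Under the implicit Gaussian assumption, ~95% of outcomes
# fall within mean +/- 2 sigma.
low, high = forecast - 2 * sigma, forecast + 2 * sigma
print(f"forecast: {forecast:.2f}, 95% interval: [{low:.2f}, {high:.2f}]")
```

On these numbers the interval spans barely a couple of million dollars around the point forecast, so a doubling of sales, or a collapse to zero, is assigned a probability so small that the forecaster discards it, which is exactly the blind spot Taleb describes.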
It’s become commonplace to hear, including from my fellow academics, that we write journal articles nobody reads. Students and participants in executive programs, we are often told, want practical tools they can apply immediately in their jobs, and they have no patience for theory. It often goes to the point where we are asked not so much to teach as to get participants talking about their favorite topic, i.e., themselves, and to lead a class discussion on this anecdotal basis, supported by some multimedia slides, while students are transfixed by their Twitter accounts. Some schools have even embraced this and claim that they don’t teach, but develop what’s already inside participants. Put otherwise, bring your own food: we repackage what you already know and you foot the bill. The idea that we as teachers might, at some point, introduce some theoretical content increasingly seems suspect, the sure sign of an out-of-touch academia trying to influence a world it supposedly does not understand. What do eggheads know about business? The idea of teaching, that we could impart some knowledge but also exert professional judgment on what we should teach to whom, seems preposterous and a sure sign of academic arrogance. I disagree. I teach, and I make no apologies for it.
There is no doubt we are terribly bad at forecasting. Even the smartest among us are. Even the best and the brightest, whom we have tasked with saving the world from financial annihilation, are. Take Ben Bernanke, Chairman of the Federal Reserve. In 2004, he declared, in a speech ominously titled “The Great Moderation”: “One of the most striking features of the economic landscape over the past twenty years or so has been a substantial decline in macroeconomic volatility. This […] makes me optimistic for the future.” You might want to read the full transcript of the “Great Moderation” talk here, because it makes for fascinating reading on how wrong experts can be at forecasting. And it’s not just Ben. In fact, political, economic and business history is littered with forecasts and predictions that turned out to be ridiculously wrong: the commercial potential of the Xerox machine or of Nespresso, the possibility of heavier-than-air flight, the market for mobile phones, prosperity just around the corner, Japan as Number One. Our hopelessness at forecasting is a confirmed fact.
I have discussed the use of history by decision makers in a previous post about Richard Neustadt and Ernest May‘s framework of historical analogies. Historian Francis Gavin gave a very interesting talk for the Long Now Foundation on the same question, but from a different angle. Gavin lays out five key concepts which, if properly understood and employed, should provide a firmer grasp on how historical analysis can benefit decision makers. I would argue that they can benefit not just policymakers but also the public at large. These concepts are vertical history, horizontal history, chronological proportionality, unintended consequences and policy insignificance.
China has long been touted as the next leading power, and for many the question is no longer if China will overtake the US but when. Recently, however, a number of dissenting opinions have started to be heard. Economists point to the strong imbalances in China’s economy; political analysts observe that its political and social structure is unstable; human rights activists warn of increasing censorship and repression; and historians suggest that, like the USSR in the late 80s, China’s communist regime has run its course and is on an unsustainable path. Indeed, “hard landing” stories about China have started to appear, from Roubini and from Gordon Chang.
As in any such debate, or rather lack of debate (it is more a series of proclamations), positions are often taken by being selective about facts, or on the basis of false analogies, shallow extrapolations, ideology, or plain ignorance. This is problematic because, regardless of what we think about China, the country matters to us in many ways. What can we do about this, then?
The Fog of War, a long 2003 filmed interview with Robert S. McNamara, shows that how one frames an issue influences how it can be solved. From the moment it became engaged in Vietnam, the US presented the conflict as a fight between freedom and communism. This happened in the late fifties, after China had become communist and right after the Korean War, in a context in which the communist world seemed to be advancing inexorably. The domino theory, introduced by the Republican US president Eisenhower in 1954, held that once a country fell to communism, its neighbors would follow. Hence it became crucial to defend any country facing a communist insurgency. As David Halberstam notes in his book “The Best and the Brightest”, the US domestic context also played a role later in the Vietnam process: Harry Truman, Eisenhower’s Democratic predecessor, was accused during the Cold War of having “lost” China in 1949 and of having been weak against the communists, particularly during the McCarthyist period. A long-standing reputation of “Democratic weakness” persists to this day as a result. In the early 60s, the Democrats were still traumatized by these accusations, which their Republican adversaries used systematically. This was the initial cognitive frame through which President Kennedy’s administration analyzed the Vietnam question. Right from the beginning, then, the administration was the prisoner, without being aware of it, of a frame that had in effect been imposed by its adversaries. Despite their doubts and mounting skepticism, they would remain unable, right until the very end, to break free of it.
A central tenet of innovation research is that firms often fail to act on a disruption that threatens their business, and falter as a result. A case in point is AT&T, the 120-year-old offspring of the Bell Telephone Company, child of Alexander Graham Bell, an American icon.
In 2005, AT&T was sold to SBC Communications. It was in a way a family story, as SBC Communications had started in the mid-eighties as the smallest of the seven “Baby Bells”, the companies created after the regulator ordered the AT&T break-up. But what a story!
AT&T introduced many innovations, and not small ones: the first commercial radio broadcast (1922), the first television transmission (1927), the first mobile phone (1946!), the first transistor (1947), the first telecom satellite (1962). AT&T was long a giant of the economic landscape: one million employees at the beginning of the 80s, and not so long ago a market value of $180 billion (1999).