The difficulty of anticipating strategic surprises is often ascribed to a ‘signal-to-noise’ problem, i.e. to the inability to pick up the so-called ‘weak signals’ that foretell such surprises. Indeed, the monitoring of weak signals has become a staple of competitive intelligence, all the more so since the development of information technology that allows the accumulation and quasi-automatic processing of massive amounts of data. The idea is that identifying weak signals will enable an organization to detect a problem (or an opportunity) early and, hence, to react more quickly and more appropriately. For instance, a firm can detect a shift in consumer attitudes by spending time with the most advanced consumers, as Nokia did in the early 1990s, a move that enabled the firm to realize that the mobile phone was becoming a fashion item.
Some firms try to detect the planned entry of a competitor into their market by monitoring purchases of land to build factories or the filing of patents. American journalists closely watch the pizzerias around the White House: a sudden surge in orders signals a sleepless night and, therefore, that something important is going on. Roberta Wohlstetter, an American scholar, investigated the Japanese attack on Pearl Harbor in December 1941, the archetype of a strategic surprise. She sought to understand how the U.S. military failed to capture weak signals despite the already impressive technical means at its disposal at the time. The results of her research came as a surprise: in fact, the U.S. had a considerable amount of information on the Japanese, whose secret codes had been broken. She writes: “At the time of Pearl Harbor the circumstances of collection in the sense of access to a huge variety of data were (…) close to ideal.” You read that right: ideal. The U.S. military had captured so many weak signals that they were not weak anymore. Her conclusion? The analytical problem does not stem from a lack of data, but from the inability to extract relevant information from mere data. As she puts it, “the job of lifting signals out of a confusion of noise is an activity that is very much aided by hypotheses.” With that, she redefines the problem: from one of accumulating weak signals, which often is not difficult, especially nowadays with the Internet, to one of knowing what to do with this mass of signals. Knowing what to do, or what to search for, means having a hypothesis, a starting point. Indeed, the data never speak for themselves. Peter Drucker himself remarked: “Executives who make effective decisions know that one does not start with facts. One starts with opinions… To get the facts first is impossible. There are no facts unless one has a criterion of relevance.” Therefore, it is hypotheses that must drive data collection.
In short, you can collect all the data you want and still make no progress at all in anticipating surprises. Therefore, the vast literature on weak signals (for instance, Day and Schoemaker’s Peripheral Vision, which is typical of the genre), while not entirely useless, will not help you much with the real problem: having good hypotheses, or opinions, to start with.
It is not a question of the quantity of data but an epistemological problem: a purely inductive approach cannot work or, worse, can be misleading. This observation was made by the researchers Kahneman and Tversky with their “belief in the law of small numbers”. Too much data, and we do not know how to sort the wheat from the chaff. Not enough data, and we make erroneous inferences. In his book The Black Swan, Nassim Taleb recently reformulated this problem with the story of the Thanksgiving turkey (Bertrand Russell’s famous example of the chicken, modified for a North American audience): every single feeding firms up the bird’s belief that it is the general rule of life to be fed every day by friendly humans. On the Wednesday before Thanksgiving, after hundreds of consistent observations, each of which has increased its confidence, the turkey has reached, unaware, the moment of maximum danger. “What,” Taleb asks, “can a turkey learn about what is in store for it tomorrow from the events of yesterday? Certainly less than it thinks.”
Of course, the weak-signals approach raises other issues. One of them is that it is easily subject to disinformation. We know that Bin Laden, who knew he was being watched and listened to, constantly sent signals he knew his opponents would catch. Often, when he had a visitor, he would hint that “something important is about to happen.” And nothing happened. This leads to the classic syndrome of “warning fatigue”. In the same vein, too much attention to weak signals can also generate false positives and their subsequent reinterpretation. For instance: Egypt prepares to invade Israel; Israel learns about it and goes on full alert, which deters Egypt; the invasion does not take place, and the whole thing is treated as a “false alarm”. The next time, the warning signals are dismissed as yet another false alarm, and that becomes the Yom Kippur War: Israel is taken completely by surprise.
In conclusion, the monitoring of weak signals is a necessary and useful tool in the toolbox of competitive intelligence and the prevention of strategic surprises, but it will not replace an active approach and active engagement with the environment. Monitoring one’s own assumptions and hypotheses is as important as monitoring weak signals, and, we would argue, more important.
Note: if you want to read more about how to discuss hypotheses in strategic decision-making under uncertainty, you can read Milo’s post: Business and Intelligence Techniques: the Role of Competing Hypotheses.