Competitive intelligence and strategic surprises: Why monitoring weak signals is not the right approach

The difficulty of anticipating strategic surprises is often ascribed to a ‘signal-to-noise’ problem, i.e. to the inability to pick up the so-called ‘weak signals’ that foretell such surprises. In fact, the monitoring of weak signals has become a staple of competitive intelligence, all the more so since the development of information technologies that allow the accumulation and quasi-automatic processing of massive amounts of data. The idea is that identifying weak signals will enable an organization to detect a problem (or an opportunity) early and, hence, to react more quickly and more appropriately. For instance, a firm can detect a change in consumer attitudes and behavior by spending time with the most advanced consumers, as Nokia did in the early 1990s, a move that enabled the firm to realize that the mobile phone was becoming a fashion item.

Some firms try to detect the planned entry of a competitor into their market by monitoring purchases of land to build a factory or the filing of patents. American journalists closely watch the pizzerias around the White House: a sudden surge in orders announces a sleepless night, and therefore that something important is going on.

Roberta Wohlstetter, an American scholar, investigated the Japanese attack on Pearl Harbor in December 1941, the archetype of a strategic surprise. She sought to understand how the U.S. military failed to capture weak signals despite its already impressive technical means at the time. The results of her research came as a surprise: in fact, the U.S. had a considerable amount of information on the Japanese, whose secret codes had been broken. She writes: “At the time of Pearl Harbor the circumstances of collection in the sense of access to a huge variety of data were (…) close to ideal.” You read that right: ideal. The U.S. military had captured so many weak signals that they were not weak anymore. Her conclusion? The analytical problem does not stem from a lack of data, but from the inability to extract relevant information from mere data. She concludes, “the job of lifting signals out of a confusion of noise is an activity that is very much aided by hypotheses.” With that, she redefines the problem: not the accumulation of weak signals, which is often not difficult, especially nowadays with the Internet, but knowing what to do with this mass of signals. Knowing what to do, or what to search for, means having a hypothesis, a starting point. Indeed, the data never speak for themselves. In fact, Peter Drucker himself remarked: “Executives who make effective decisions know that one does not start with facts. One starts with opinions… To get the facts first is impossible. There are no facts unless one has a criterion of relevance.” It is therefore hypotheses that must drive data collection. In short, you can collect all the data you want and still make no progress at all in anticipating surprises. The vast literature on weak signals (for instance, Day and Schoemaker’s Peripheral Vision, which is typical of the genre), while not entirely useless, will not help you much with the real problem: having good hypotheses, or opinions, to start with.
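
To make the contrast concrete, here is a toy sketch of what hypothesis-driven filtering of incoming data might look like; the hypotheses and keywords are invented purely for illustration, not a description of any real intelligence workflow:

    # Toy illustration (invented hypotheses and keywords): a piece of data is only
    # retained if it bears on a hypothesis that was stated beforehand.
    hypotheses = {
        "a competitor is preparing to enter our market": ["land purchase", "factory", "patent", "hiring"],
        "mobile phones are becoming fashion items": ["design", "colour", "accessory", "style"],
    }

    def relevant(raw_data):
        """Return the hypotheses this piece of raw data bears on, if any."""
        text = raw_data.lower()
        return [h for h, keywords in hypotheses.items()
                if any(k in text for k in keywords)]

    # Without a criterion of relevance, both items below are just "data".
    print(relevant("Rival files a patent for a new handset casing"))  # -> first hypothesis
    print(relevant("Quarterly weather report for the region"))        # -> []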

It’s not a question of the quantity of data; it is an epistemological problem: a purely inductive approach cannot work or, worse, can be misleading. This is the observation Kahneman and Tversky made with the “belief in the law of small numbers”. Too much data, and we do not know how to sort the wheat from the chaff. Not enough data, and we make erroneous inferences. In his book “The Black Swan”, Nassim Taleb recently reformulated this problem with the Thanksgiving turkey (Bertrand Russell’s famous chicken example, adapted for a North American audience): every single feeding firms up the bird’s belief that it is the general rule of life to be fed every day by friendly humans. On the Wednesday before Thanksgiving, after hundreds of consistent observations (each of which increases its confidence), the turkey reaches, unawares, the moment of maximum danger. “What,” Taleb asks, “can a turkey learn about what is in store for it tomorrow from the events of yesterday? Certainly less than it thinks.”

Of course, the weak-signals approach raises other issues. One of them is that it is easily subject to disinformation. We know that Bin Laden, who knew he was being watched and listened to, constantly sent signals he knew his opponents would catch. Often, when he had a visitor, he would hint that “something important is about to happen.” And nothing happened. This leads to the classic syndrome of “warning fatigue”. In the same vein, too much attention to weak signals can also generate false positives and reinterpretation. For instance: Egypt is about to invade Israel; Israel learns about it and goes on full alert, which deters Egypt; the invasion does not take place, and the whole thing is treated as a “false alarm”. The next time, warning signals are dismissed as yet another false alarm, and this becomes the Yom Kippur War: Israel is taken completely by surprise.

In conclusion, monitoring weak signals is a necessary and useful part of the toolbox of competitive intelligence and the prevention of strategic surprises, but it will not replace an active approach and active engagement with the environment. The monitoring of one’s own assumptions and hypotheses is at least as important as the monitoring of weak signals, and we would argue more important.

Note: if you want to read more about how to discuss hypotheses in strategic decision-making under uncertainty, you can read Milo’s post: Business and Intelligence Techniques: the Role of Competing Hypotheses.

11 responses to “Competitive intelligence and strategic surprises: Why monitoring weak signals is not the right approach”

  1. Interesting post – and I agree with you.

    There’s another approach to seeing what’s next that helps sort the wheat from the chaff of weak signals. It relates to hypotheses but not totally.

    When you have a collection of weak signals, don’t treat them all the same. Categorise them. Are they about the target’s capabilities? Put these in box 1. Are they about its strategy? Box 2. Goals? Box 3. Assumptions? Box 4. Anything else? Box 5.

    You then look at each of boxes 1-4, compare the signals to other information you have, and see whether they fit. You can also use Porter’s 4-corners approach to analyse whether they agree with or contradict each other. That can help you sort out what to do with them.

    With box 5, try to work out why it is in box 5. (It may be, for example, that you have information but no target to pin it to, so you can’t do the above.)

    For all the information, and especially box 5 information, you should look at the information source and try to understand why the information became available. Do this and you will often gain an idea of its veracity.

    The problem then becomes not the analysis of the information but its quantity. With too much information you start to drown and can’t categorise it – it’s not a computer job, but a human job. In this case one approach is to do the above with a random sample of the information, depending on your confidence needs and the quantity; a sketch of this boxing-and-sampling idea follows below.
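
    Something like the following (the box names and the Signal fields here are just my own illustrative assumptions, not a standard tool):

        import random
        from dataclasses import dataclass

        # Boxes 1-5 as described above.
        BOXES = ["capabilities", "strategy", "goals", "assumptions", "other"]

        @dataclass
        class Signal:
            text: str       # the raw observation
            source: str     # where it came from (useful later when judging veracity)
            category: str   # one of BOXES, assigned by a human analyst

        def sort_into_boxes(signals):
            """Group signals by box; anything unrecognised goes to 'other' (box 5)."""
            boxes = {box: [] for box in BOXES}
            for s in signals:
                boxes.get(s.category, boxes["other"]).append(s)
            return boxes

        def sample_for_review(signals, k):
            """When the volume is overwhelming, review a random sample rather than everything."""
            return random.sample(signals, min(k, len(signals)))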

    Of course none of this solves the Thanksgiving turkey problem totally. In this case, you have lots of consistent information coming in saying “humans provide food”. In fact your analysis is at fault – not the information. If you look at the source of the information and try to understand it, you may (emphasis on may) come up with the truth. For example:
    Information: Humans provide food.
    Source: observation that humans give food every day from multiple reliable sources.
    Now question the reason / objectives behind this observation. Why was this observation available? You arrive at the hypothesis approach you suggest – so you have to test each set of consistent observations against the hypotheses and see which matches. Then choose your own strategy based on an assessment of risk.
    So potential hypotheses:
    “1 humans like me and so feed me” (i.e. humans are nice)
    “2 humans feed me for some other reason” (i.e. humans are not nice).

    Until other information comes in to justify hypothesis 1, hypothesis 2 is the safer one to adopt: even if hypothesis 1 is true, you won’t get hurt by adopting a strategy predicated on hypothesis 2. (You may not eat so much and may be called skinny by all the other turkeys near you; however, you are less likely to be killed.) A rough worked comparison is sketched below.
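
    With invented payoff numbers (purely illustrative), the comparison might look like this: choose the strategy whose worst case is least bad.

        # Invented payoffs: rows = strategy, columns = which hypothesis turns out to be true.
        # H1 = "humans are nice", H2 = "humans are not nice".
        payoffs = {
            "trust the humans":   {"H1": 1,  "H2": -100},  # eat well, but fatal if H2 is true
            "stay wary (skinny)": {"H1": -1, "H2": -1},    # eat less either way, but survive
        }

        # Minimax choice: maximise the worst-case outcome.
        safest = max(payoffs, key=lambda strategy: min(payoffs[strategy].values()))
        print(safest)  # -> "stay wary (skinny)"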

  2. Pingback: Analysing weak signals for competitive intelligence « Find It Out – Research Secrets and More

  3. Margaret Dospiljulian

    Like and agree with much of Milo’s post and the response from awareci.

    What I’d like to offer is that gathering strategic intelligence is great as long as that data doesn’t just sit in a repository of boxes. As awareci’s post indicates, having boxes to categorize the data into, with questions that allow you to look at it, is a great start. When working with a client on strategic consulting engagements, the next step is to find out, or hypothesize, what the organization’s vision is (where it wants to go) in order to determine how the incoming data relates to it.

    In the case of the turkey, the humans’ vision, seen from a long-lived turkey’s perspective, would be that humans like to take out the fat birds once a year. Obviously, the new turkey has limited ability to learn this unless there are other turkeys who have been in the pen for more than a year. But if there are, then he would be able to gain added data which, if he emulates the survivors, would potentially keep him around for at least another year, during which time he could continue to gather real-time data as he validates the hypothesis about the humans’ vision by seeing which birds are taken. This ensures that the data gathered becomes actionable information that arms the turkey with the ability to do something.

    In the case of business, it’s the same: when signals or data are gathered, it’s important to try to understand or hypothesize what the originating organization’s overall vision is. That is what sets the stage for its strategies, its goals, and why it is working on beefing up specific capabilities. If the data gathered doesn’t fit with the stated or understood vision, that’s when a flag should go up to determine what’s changing, or to determine that it’s rogue data. In this way, the data becomes actionable versus just a repository of categorized “stuff”.
    Margaret Dospiljulian, Dospil & Associates

    • Hi Margaret
      Thanks for your interesting insights. One of the difficulties, and a limit of Taleb’s example, is that Thanksgiving is a repeated event. It doesn’t happen often, but it happens regularly. Hence it is a rare event, but not a unique event. 9/11 or Fukushima are unique events; they will never be repeated exactly. We can learn from rare events, and you suggest how, but we cannot learn from unique events. Or if we can, it is useless. Learning assumes a history from which we can learn.
      Best
      Philippe

      • One problem I have with the Black Swan idea is the notion that things can’t be predicted (except in hindsight).

        The idea behind the book’s title, that all swans were assumed to be white until a black one was found, is valid. If you put it to a hypothesis test, “All swans are white” vs. “Swans can be other colours”, then it makes sense to choose the first hypothesis if you have no evidence to the contrary. Both are harmless.

        In the case of the turkey at Thanksgiving, it could be this situation: if all turkeys are fed by humans, you have no real reason to suspect an ulterior motive for being fed. You’d need to go deeper to understand how the relationship started, which of course turkeys can’t do (i.e. they won’t ask how come humans feed turkeys, or whether there are turkeys in the world who aren’t fed by humans, and so on).

        However, that is what humans have to do if they want to create robust strategies that cope with rare or unique events. They need to think about origins and how things started. It’s why stories (creation stories, evolution, the “big bang” theory) are so powerful in capturing the non-scientific imagination.

        So when something is consistent and doesn’t change, the first question should always be “why doesn’t it change?”. If there is a good reason for this, then that will govern the hypothesis testing and the strategy. Generally, however, there is no good reason for lack of change. (As an example, go back 150 years and ask “will the horse and cart be the dominant local transport method?” You would quickly come up with the answer “not necessarily”, as long-distance transport had already changed due to the railway. The horse and cart also had substitutes (e.g. walking), so things weren’t totally consistent. Change could be anticipated, even if the exact change couldn’t be. Moreover, when the car was invented, it wasn’t an immediate change.)

        Essentially the real threat of a Black Swan is where things are consistent (i.e. no reason to suspect change) and sudden (i.e. no signals before the event to indicate change). That’s what happened when the black swan was found. It’s NOT what happened for 9/11 or Fukushima.

        In the case of 9/11 there were lots of weak signals. The problem was that they were ignored or not taken seriously. Even the idea of a plane flying into the twin towers had been considered in previous work. More importantly, Al Qaeda had shown the towers were a target in 1993, when Ramzi Youssef tried to blow up the buildings. Although the collapse of the towers was unexpected, the idea of a plane flying into them was predictable. Planes had flown into skyscrapers before. http://en.wikipedia.org/wiki/B-25_Empire_State_Building_crash Also there were lots of disaster movies about fires in such buildings – the Towering Inferno was one (http://www.imdb.com/title/tt0072308/synopsis) in which the building was at risk of collapse. It’s not a big stretch to create a story combining the two (or other aspects, e.g. a better car bomb than the 1993 one, placed near a more strategic pillar?).

        Fukushima is more interesting – apparently they’d anticipated earthquakes, and the building was constructed to withstand them. They’d forgotten that earthquakes are often accompanied by tsunamis, and so left out protection against these. The best explanation of what SHOULD have happened here (said in hindsight, of course) comes from Peter Schwartz’s book: The Art of the Long View.

        Schwartz writes: “If the planners of Three Mile Island had written a story about how things could go wrong, instead of a numeric analysis of possible fault sequences, they would have been better prepared for the surprise they actually encountered when their complex machine went astray”. Substitute Fukushima for Three Mile Island and the story is exactly the same.

  4. Pingback: Competitive Intelligence | Pearltrees

  5. Very good observations. One elemental problem is that “surprises” are often the product of non-linear phenomena, where the initial conditions acting on an isolated dynamic produce butterfly effects. An examination, for example, of the street dynamics of Egypt, the sustained use of conspiracy theory to redirect public dissatisfaction, and the organization of the MB produced AN EXPECTED result that was obscured when these elements were viewed in isolation. Understanding the “dynamic” might have resulted in a US policy that sent grain ASAP and advocated delaying Mubarak’s departure until a new Constitution directing the promised election had been created before the abdication of the former regime. Another point would be Lewis’ advice not to view behavior in terms of nation states, implying that weak signals are amplified by their resonance across state borders through value systems that hold religious ideals above nationalism. Without the mind testing hypotheses, non-linear data is unmanageable. It is our cognition that uncovers the strange attractors.

  6. Pingback: Knowledge | Pearltrees
