Yesterday’s post (Strategy: How Organizations help us Manage Uncertainty into Risks) explored how measurement and organizational forms help us translate much of the uncertainty we face into risk. We have many more tools for managing risk (variability controls, exception-making principles, etc.). Today’s post looks at how the practice of strategy does this too.
In one sense, strategy work brings new uncertainties into our lives. To the degree that a new strategy moves into areas that are new to us, we introduce new sources of direct uncertainty. By direct, I mean we now have to face what is somewhat unfamiliar, because our new activities will impinge on these sources of uncertainty. We can even put it in reverse: because our new activities bring us closer to certain sources of uncertainty, those sources are more likely to come into play for us over time.
The practice of strategy translates these new sources of uncertainty into areas of risk. How? Through the process of crafting priorities, objectives, strategies and implementation action plans.
Setting priorities, in part, establishes which sources of uncertainty we recognize (hence they are Grey Swan-like sources of uncertainty) and which we determine need specific attention. Any positive play (e.g., “It is critical that we become the leader in ‘X’”) also carries with it an implicit “defensive” play: the sources of uncertainty attached to that priority. Setting objectives is the process of putting us in positions of strength, or at the very least reducing areas of weakness to irrelevancy. The very notions of strength and weakness embody the notion of opposition; these are, in a more specific sense, our recognized sources of uncertainty.
Strategies are the game plans for how we choose to execute and achieve our objectives. Often we explicitly incorporate defensive moves to reduce the impact or likelihood of an uncertainty event thwarting our strategy.
It is not uncommon, somewhere in the strategy-setting process, to do some form of size-up of our position relative to achieving our strategic objectives. This process explicitly or implicitly clarifies how we would fare if we faced identified potential difficulties. The reader will note that I have qualified sources of uncertainty with “identified” and “recognized”. This equivocation is deliberate. Unless we engage in an analysis and conversation about our proposed strategy’s sources of uncertainty, we cannot be confident we are aware of the more likely and/or serious threats to achieving it. I would be interested in how others set this conversation up with their clients.
The process for translating uncertainty into risk-like situations rests on the following sort of cascading inquiry:
- What could derail our strategy (as we move forward, and when we get there), how and why?
- Are we clear about what the threat is specifically?
- So what? Is this a matter of inconvenience, significance, survival, etc.?
- Do we have a sense of its likelihood? How useful (can we materially use it) is this sense?
- What can we learn about it that can help us?
- Does it appear to be random in its occurrence?
- Is there a cause and effect theory of how it occurs?
- Can we benefit from knowing more about it?
- Can we learn more and use this knowledge more effectively than those who compete with us and face the same sources of uncertainty?
- Who else would care that we win, lose or draw and would this shift over time (or as we progress to achieve our strategy)?
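The cascading inquiry above can be thought of as a rough triage. As a minimal sketch (the category names, fields and routing rules below are my own illustration, not the author’s framework), the answers to those questions might map an uncertainty source onto a handling stance something like this:

```python
from dataclasses import dataclass

@dataclass
class UncertaintySource:
    name: str
    consequence: str            # "inconvenience", "significant", or "survival"
    likelihood_is_usable: bool  # do we have a materially useful sense of likelihood?
    causal_theory_exists: bool  # is there a cause-and-effect theory of how it occurs?

def triage(source: UncertaintySource) -> str:
    """Map answers to the cascading questions onto a rough handling stance."""
    if source.consequence == "inconvenience":
        return "accept"                      # not worth dedicated attention
    if source.causal_theory_exists:
        return "monitor leading indicators"  # causality lets us watch for build-up
    if source.likelihood_is_usable:
        return "manage as risk"              # classic risk controls apply
    return "insure or exit"                  # random and serious: lay it off or leave

print(triage(UncertaintySource("market entry blocked", "significant",
                               likelihood_is_usable=False,
                               causal_theory_exists=True)))
# a source with a workable causal theory is routed to indicator monitoring
```

The ordering of the checks matters: a causal theory is preferred over a bare likelihood number, which anticipates the point made below about how rarely likelihood metrics are truly useful.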
As we can see, as we traverse these kinds of questions, we develop a better sense of how we can “manage” for the event’s occurrence. For example, if we have a workable theory of causality, we can monitor for its growing likelihood (even if we don’t really have a great handle on likelihood itself). This is a leading-indicator activity. It buys us time that we can hopefully use to somehow mitigate the event’s impacts and/or likelihood.
The usefulness of a notion of likelihood is an interesting issue. Let us say we know enough about a potential event to say that it is (based on history) a once-in-a-lifetime event. Whose lifetime, or what’s lifetime? If this were a technical issue (involving engineering), the response would be to strengthen some aspect of the structure or system so it could safely survive the event. If it were a rarer but more catastrophic event, we might invest in leading-indicator monitoring.
However, I was taken by an article I read (Financial Post Magazine, September 2009, page FPM37) that suggested such a notion of likelihood is almost irrelevant:
“The problem with financial markets is that once-in-a-lifetime events happen roughly every four years” (quote attributed to Al Kellett)
The notion of lifetime here clearly does not relate to a human lifespan. So what lifetime is being referred to? We can talk to traffic safety engineers and they can tell us that at a given road location the odds of being hit by a vehicle are “X%”. Again, how useful is this information if crossing roads is something you might do? Why is this value virtually useless? Because over the course of time traffic patterns may vary so greatly that the so-called average is a statistically meaningless piece of information. In reality, you would monitor the traffic conditions in the moment and decide then on your “chances” and how you might reduce them further (perhaps by moving to a midpoint of safety such as a median).
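Kellett’s quip can be made concrete with a little arithmetic. The numbers below are my own illustration (not from the article): if an event has a small annual probability p, the chance of seeing at least one occurrence in n years is 1 − (1 − p)ⁿ, and watching many markets at once multiplies the exposure.

```python
def prob_at_least_one(p_annual: float, years: int) -> float:
    """Chance of seeing at least one occurrence over a horizon of `years`."""
    return 1 - (1 - p_annual) ** years

# A hypothetical "1-in-70-year" event watched in a single market over 4 years:
p_single = prob_at_least_one(1 / 70, 4)
print(round(p_single, 3))  # ≈ 0.056 — a modest chance

# But watch 20 loosely related markets, each with its own 1-in-70-year event
# (treated as independent here, which real markets are not — correlation in a
# crisis makes the clustering even worse), and the chance that *some*
# "once-in-a-lifetime" event shows up within 4 years:
p_any = 1 - (1 - p_single) ** 20
print(round(p_any, 2))  # ≈ 0.68
```

On these illustrative assumptions, a diversified observer should *expect* to witness a “once-in-a-lifetime” event every few years, which is exactly the point of the quote.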
It is because I am sceptical about how often any metric of likelihood is truly useful that I hold the view that we live in an uncertainty-like universe that, from time to time, is sane enough to be risk-like.
The art of strategy is to help us manage our exposure even when we don’t have strongly effective notions of likelihood. If the event is thought to be random and the consequences of its occurrence are significant, I suggest we look for insurance (i.e., find another fool to take over the uncertainty, and hopefully not charge us too much for doing so). And if we can’t, should we really be in this game? There is a macabre option: always go into a situation of danger with a companion (e.g., two hunters looking for bear), one who is even less equipped to escape the event’s consequences. Perhaps, if they are caught, this gives you the time needed to escape.
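The “find another fool” test above can be sketched as a simple decision rule. All figures and the 2× premium-tolerance threshold below are hypothetical, chosen only to illustrate the shape of the reasoning:

```python
def insurance_stance(p_annual: float, loss: float, premium: float,
                     survivable: float) -> str:
    """Rough stance on a random event with a given annual probability and loss."""
    expected_loss = p_annual * loss
    if loss <= survivable:
        return "self-bear"                  # we can absorb the hit ourselves
    if premium <= 2 * expected_loss:        # crude tolerance for the insurer's margin
        return "insure"                     # lay the uncertainty off at a bearable price
    return "reconsider being in this game"  # uninsurable at a price we can live with

print(insurance_stance(p_annual=0.02, loss=5_000_000,
                       premium=150_000, survivable=500_000))
# expected loss is 100,000; a 150,000 premium is within tolerance → "insure"
```

The third branch is the post’s real point: when no one will take the uncertainty off our hands at a bearable price, the honest question is whether to be in the game at all.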
What makes something uncertain is the fact that we are missing instrumentally useful knowledge about the event. This is why having a usable cause-and-effect theory about the event is so helpful, and why investments in learning more about the event, its attributes and its dynamics can pay off: we learn enough to anticipate it when it looks like it could be occurring, and/or we get a better handle on how severe the event may actually be (sandbag around our house, or pack up and leave).
Perfect knowledge is not the aspiration; enough knowledge to have lead time and to know how serious it could be is. The practice of crafting strategy would, I believe, be better served by investing the time and effort to explore the uncertainties we face.