One common misconception that I was guilty of subscribing to is that while qualitative risk analysis always has some degree of subjectivity, quantitative risk analysis should remain strictly objective. My thinking has since shifted: there is no such thing as truly objective data with which to conduct such an analysis, because even if the measurements are truly reproducible and repeatable, observer bias is still injected into the mix. Rather than fighting the subjectivity, it’s much easier to just accept it. I know it sounds like a terrible idea, but bear with me.

In a previous post, I described pulling marbles from a bag using a Bayesian technique to develop an estimate. I actually had a few people get in touch with me about how ridiculous it is to just automatically assume that there is an equiprobability of marbles without any prior knowledge as to the contents. Lacking any prior knowledge or experience, we are using the Bayesian concept of an uninformative prior, which gives us a general starting point. This concept demands indifference, and until we find out otherwise we will assume that there is an equiprobability.

But what if we had prior knowledge or experience? If I spent my youth playing marbles, perhaps I could tell you that a bag of that size and weight likely contains about 20 marbles. Could we use this to our advantage? Absolutely. We can now reasonably assume that the red marble will be drawn at a minimum of 5% of the time. This is what is often referred to as a priori, or an informative prior. But that’s just a minimum; we could be facing a significantly higher percentage. However, an incomplete data set is what drove us towards a Bayesian technique in the first place. We now have the option of using either the informative or the uninformative prior for our first round of testing.

This is something that pains people to discuss, because we are now possibly considering 5% likelihood as the low end of the spectrum based upon my personal experience. Before we grab the pitchforks and torches, let’s remember that expert judgment is regularly called upon throughout planning various aspects of a project and that human input remains important. This is not to say that we should not debate the veracity of this estimate, because we should! But let’s not debate the source, let’s not argue solely because it was based upon a person’s opinion rather than empirical data.

At the end of the first test, when we draw the first marble and record the result, we have our first posterior. However, when we go to run our test again, we will change the equation that we use to calculate probability. We will now have a new likelihood for drawing a red marble, as the posterior from the previous experiment becomes the next experiment’s prior.
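The update cycle described above, where each draw's posterior becomes the next draw's prior, shown for both starting points as an added example that is not from the original post. It assumes draws are made with replacement and models the informative prior as flat above the 5% floor; the grid, function names and sample draws are constructed for illustration.

```python
# Added illustration: sequential Bayesian updating on a grid of hypotheses.
# Assumptions (not from the post): draws are made with replacement, and the
# informative prior is flat above the 5% floor and zero below it.

GRID = [i / 200 for i in range(201)]  # candidate values for P(red)


def normalise(weights):
    total = sum(weights)
    return [w / total for w in weights]


def uninformative_prior():
    """Indifference: every candidate proportion is equally plausible."""
    return normalise([1.0] * len(GRID))


def informative_prior(floor=0.05):
    """Expert judgment: P(red) cannot be below the floor (1 marble in ~20)."""
    return normalise([1.0 if p >= floor - 1e-12 else 0.0 for p in GRID])


def update(prior, drew_red):
    """One draw: posterior is prior times likelihood, renormalised."""
    likelihood = [p if drew_red else 1.0 - p for p in GRID]
    return normalise([w * l for w, l in zip(prior, likelihood)])


def posterior_mean(dist):
    return sum(p * w for p, w in zip(GRID, dist))


def run(prior, draws):
    """Feed each posterior back in as the next draw's prior."""
    history = [posterior_mean(prior)]
    for drew_red in draws:
        prior = update(prior, drew_red)
        history.append(posterior_mean(prior))
    return history


# Constructed sample data: 10 draws, 2 of them red.
DRAWS = [False, False, True, False, False, False, True, False, False, False]

if __name__ == "__main__":
    flat = run(uninformative_prior(), DRAWS)
    expert = run(informative_prior(), DRAWS)
    for n, (a, b) in enumerate(zip(flat, expert)):
        print(f"after {n:2d} draws  uninformative={a:.3f}  informative={b:.3f}")
```

The two estimates start apart and move closer as draws accumulate, and the final posterior does not depend on the order in which the same draws were observed.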

As further data is collected, our calculations will continue to evolve, and with this additional data we will develop refined probabilities. Herein lies the argument that people have against Bayesian methods for risk management in a predictive life cycle project: if planning is done upfront, how can appropriate plans ever be completed if the results of risk management activities continue to change? The problem is the attempt to conduct planning in a vacuum, not the methodology we are using for risk.

When the facts change, I change my mind. What do you do, sir? – John Maynard Keynes

It goes without any great argument that planning within a project is iterative and ongoing. In fact, project managers regularly engage in what is called progressive elaboration where a plan is regularly refined as more information becomes available. Risk management is no different. One of the processes invoked is Control Risks, where risks are supposed to be reassessed at a frequency determined within the risk management plan. This reassessment is supposed to determine if a shift in probability and/or impact has occurred since last assessed.

This type of reassessment should be intuitive for most people. I have lived in New Jersey for a number of years, and it gets some great weather – especially in the winter time. When we receive word that we are going to have a major snow storm, I try to check the weather every 3-4 hours to see if it’s still coming towards us and how much snow we are going to get. It’s a running joke that if they call for the storm of the century, we’ll get flurries; but the opposite holds true, as well. Think about how unreasonable it would be for me to watch the news once and tell the kids that they don’t have school next week!

Risk is uncertainty. The thing that hurts most projects is that we try to turn that uncertainty into certainty, which just does not work. I embrace everything in terms of likelihood of occurrence based upon probability developed from data. There’s an 80% chance of snow? I like to tell my kids there’s a 20% chance that they’re going to school. Risks can be managed, and we can do our due diligence to gather as much data as possible.

Karl Cheney
Castlebar Solutions

3 thoughts on “Subjectivity and Risk Scoring”

  1. I agree with your methodology. What cripples most projects is that they do not follow an iterative process when it comes to managing risks. They come up with a Risk Register at the outset of the project which lists basic, underdeveloped risks and responses and do not consider residual risks or the fact that re-assessment is needed throughout the life of the project.

    The Bayesian concept of uninformative prior is similar to Kant’s “a priori” knowledge. While Kant deals with philosophy, the similarities are there. The biggest takeaway is that Kant states that knowledge gained a posteriori is not absolute because the possibility of encountering an exception is there. It’s the same with risk. There is the possibility that, despite your previous knowledge, the results can deviate from what you’d expect. This is why iterative risk planning is important because, like you mentioned, risk is uncertainty. Despite what you may or may not know, you have to plan for it by gathering as much data as possible.

    1. Yamilec, great points across the board. I think you definitely think along the same lines I do: risk is uncertainty. Acting like quantification changes what it inherently is leads to trouble. If we keep an eye on identified risks and watch as they evolve, we will always be in a better spot to deal with them as they do.
