I used to be uncertain, but now I’m not so sure. Whatever the cause, uncertainty is a fundamental part of the real world and has to be dealt with. By way of a definition, uncertainty is an unknown that is the same for everyone. An example would be the possibility of a storm hitting New York next week. Another would be the possible variation around a projected trend in a company share price. When several factors used in a model have inherent uncertainty, how those uncertainties combine defines the uncertainty of the outcome. So what should you do when measuring uncertainty for each factor – add uncertainties, multiply them or handle their propagation in another way?
Uncertainty for all, risk for some, error for none
Whether or not the uncertainty you see translates into risk that you bear depends on your personal involvement. If you are not in New York if or when a storm hits the city, then there is no risk to you of getting wet or being struck by lightning. If you have not invested in a particular company, the evolution of its share price may leave you indifferent. If you have invested, however, you are now exposed to risk. You’d like a positive risk of the price going up – unless you shorted the stock, in which case a rising price represents negative risk. Error, on the other hand, is not uncertainty: it’s just error, as in somebody fat-fingering a calculator or misusing a spreadsheet application in an attempt to map out the future share price.
Looking for certainty
In a ‘glass half full, glass half empty’ style change of perspective, we might prefer to look for more certainty instead of measuring uncertainty. For a series of measurements in a scientific experiment, for instance, the standard deviation measures the degree of certainty that an additional measurement will fall within a certain range of the average of all the measurements so far. In fact, there is about a 68% probability that the new measurement will be no farther away than one standard deviation’s length from the average.
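To see the roughly 68% figure in action, here is a small simulation sketch in Python. The sample values (an average of 64.7 with a spread of 2.3) are illustrative assumptions, not data from any real experiment:

```python
import random
import statistics

random.seed(42)

# Simulate 10,000 measurements around an assumed true value of 64.7
# with an assumed spread of 2.3 (illustrative numbers only).
samples = [random.gauss(64.7, 2.3) for _ in range(10_000)]

mean = statistics.mean(samples)
sd = statistics.stdev(samples)

# Fraction of measurements within one standard deviation of the mean
within = sum(abs(x - mean) <= sd for x in samples) / len(samples)
print(f"mean = {mean:.2f}, sd = {sd:.2f}, within 1 sd = {within:.0%}")
```

For normally distributed measurements, the printed fraction lands close to 68%, matching the rule of thumb.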
How exact can you be about uncertainty?
One scientific approach to uncertainty is that it cannot be quoted to more than one significant figure, for example ±0.04 of an inch or of a dollar. When the uncertainty starts with a one, the rule is sometimes adjusted to allow two significant figures, as in ±0.013 of an inch. The measurement you are making (length, share price) should also be rounded to the same decimal place as the uncertainty you have identified: for instance, 64.7 dollars ± 1.3 dollars.
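This rounding convention is mechanical enough to automate. The helper below is a sketch of one way to apply it (the function name and the sample inputs are my own, not from any standard library):

```python
import math

def round_with_uncertainty(value, uncertainty):
    """Round the uncertainty to one significant figure (two if its
    leading digit is 1), then round the value to the same decimal place."""
    exponent = math.floor(math.log10(abs(uncertainty)))
    leading = int(abs(uncertainty) / 10 ** exponent)  # leading digit
    sig_figs = 2 if leading == 1 else 1
    decimals = -(exponent - (sig_figs - 1))
    return round(value, decimals), round(uncertainty, decimals)

print(round_with_uncertainty(64.7312, 1.27))    # → (64.7, 1.3)
print(round_with_uncertainty(0.04251, 0.0132))  # → (0.043, 0.013)
print(round_with_uncertainty(100.23, 0.043))    # → (100.23, 0.04)
```

Note the second case: because the uncertainty 0.0132 starts with a 1, two significant figures are kept.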
Combining uncertainties in simple models
The examples above have assumed that uncertainty follows a normal distribution. When measurements are combined to produce a final result, their individual uncertainties propagate into the overall uncertainty of that result. The rule depends on how the measurements combine: for a sum, the absolute uncertainties add; for a product, it is the relative (percentage) uncertainties that add. As a simple example, take a rectangular object whose height, width and depth are each measured as, say, 10.00 ± 0.02 inches – a relative uncertainty of 0.2% per dimension. Then volume = height x width x depth, and the worst-case relative uncertainty of the volume is 0.2% + 0.2% + 0.2% = 0.6%. (When the errors are independent, a tighter estimate adds them in quadrature, that is, as the square root of the sum of their squares.) But what if the uncertainties do not follow normal distributions?
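The propagation of uncertainty through a product can be sketched in a few lines of Python. The measurement values below are assumed for illustration; the code contrasts the worst-case rule (relative uncertainties add) with the quadrature rule for independent errors:

```python
import math

# Assumed measurements (inches) with absolute uncertainties
height, width, depth = 10.00, 5.00, 2.00
u_h = u_w = u_d = 0.02

volume = height * width * depth  # 100.0 cubic inches

# Relative uncertainty of each factor
rel = [u_h / height, u_w / width, u_d / depth]

# Worst case: for a product, relative uncertainties add
worst_case = volume * sum(rel)

# Independent errors: add relative uncertainties in quadrature
quadrature = volume * math.sqrt(sum(r * r for r in rel))

print(f"volume = {volume:.1f} ± {worst_case:.1f} cubic inches (worst case)")
print(f"volume = {volume:.1f} ± {quadrature:.1f} cubic inches (quadrature)")
```

The quadrature figure is smaller because independent errors partly cancel rather than all pushing in the same direction at once.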
Modeling the propagation of different uncertainty distributions
When uncertainty is better modeled by other types of probability distribution (for example, lognormal distributions for certain financial markets), Analytica allows a different distribution to be assigned to each factor in a model. When the model’s outcomes are then simulated using values drawn from those different distributions, the resulting uncertainty – whatever it turns out to be – takes into account the non-normal nature of the inputs.
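The underlying idea – Monte Carlo simulation over mixed distributions – can be sketched in plain Python. This is not Analytica syntax; the model (revenue = price x volume − cost) and every distribution parameter are assumptions chosen purely for illustration:

```python
import math
import random

random.seed(7)
n = 50_000  # number of Monte Carlo samples

revenues = []
for _ in range(n):
    # Each factor draws from its own (assumed) distribution:
    price = random.lognormvariate(math.log(60), 0.05)   # skewed, like a share price
    volume = random.gauss(1000, 50)                     # symmetric measurement noise
    cost = random.triangular(40_000, 55_000, 45_000)    # asymmetric cost estimate
    revenues.append(price * volume - cost)

# The output distribution reflects all three input distributions
revenues.sort()
median = revenues[n // 2]
p5 = revenues[int(0.05 * n)]
p95 = revenues[int(0.95 * n)]
print(f"median revenue ≈ {median:,.0f}")
print(f"90% interval ≈ [{p5:,.0f}, {p95:,.0f}]")
```

Because the inputs are skewed, the resulting interval around the median is itself asymmetric – exactly the kind of behavior a simple add-the-uncertainties rule would miss.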
If you’d like to know how Analytica, the modeling software from Lumina, can help you manage the combination and propagation of uncertainties, then try a free evaluation of Analytica to see what it can do for you.