Tracking uncertainty in a model's input data and showing how different degrees of that uncertainty affect the final result are strengths of Analytica. However, uncertainty also arises in dimensions other than the input data alone, and uncertainty analysis extends across those dimensions accordingly. Analytica's functionality can nevertheless help you converge on the truth in many of them.
Your input data may vary over different ranges, which may be open-ended, semi-open-ended, or closed. Analytica lets you handle all three in an uncertainty analysis. It also gives you a wide range of probability distributions corresponding to different real-world situations: for instance, normal for biological characteristics such as weight, lognormal for investment returns, gamma for combinations of exponential distributions, and so on.
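To make those distribution choices concrete, here is a minimal sketch in Python (not Analytica code) using NumPy's random generator. The parameter values (a mean weight of 70 kg, a 20% return volatility, a gamma shape of 3) are made-up illustrations, not recommendations:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Normal: biological characteristics such as body weight (hypothetical parameters).
weight_kg = rng.normal(loc=70, scale=12, size=n)

# Lognormal: investment growth factors, which are strictly positive and skewed.
growth_factor = rng.lognormal(mean=0.05, sigma=0.2, size=n)

# Gamma: a sum of exponential stages, e.g. total waiting time across 3 steps.
wait_time = rng.gamma(shape=3, scale=2.0, size=n)

for name, sample in [("weight", weight_kg),
                     ("growth", growth_factor),
                     ("wait", wait_time)]:
    print(f"{name}: mean={sample.mean():.2f}, sd={sample.std():.2f}")
```

Note how the lognormal and gamma samples are always positive (a semi-open-ended range), while the normal is unbounded on both sides; matching the distribution's support to the real-world quantity is part of the modeling decision.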
The parameters you include in the model may or may not adequately reflect the situation in the real world. While only users or modelers can determine the basis for a model, Analytica provides tools for structural uncertainty analysis too. The influence diagram gives a rapid overview of the factors built into the model and the relationships between them. Sensitivity and importance analysis then let you remove factors with only minimal or negligible influence, simplifying the model and leaving room for other parameters to be added and evaluated in the same way.
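One common way to measure importance is the rank (Spearman) correlation between each input's Monte Carlo sample and the output. The following is a hedged sketch in plain Python/NumPy, not Analytica's built-in importance analysis; the toy model `result = 3a + 0.5b + 0.01c` is invented so that input `c` is deliberately negligible:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Three hypothetical uncertain inputs; "c" has almost no influence by design.
a = rng.normal(0, 1, n)
b = rng.normal(0, 1, n)
c = rng.normal(0, 1, n)
result = 3 * a + 0.5 * b + 0.01 * c

def rank_corr(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

for name, x in [("a", a), ("b", b), ("c", c)]:
    print(f"importance of {name}: {rank_corr(x, result):.3f}")
```

Inputs whose rank correlation with the output stays near zero, like `c` here, are candidates for removal from the model.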
The underlying mechanisms at work in the real world are often complex. Each parameter in the model is related algorithmically to at least one other parameter or to the final result. Once again, the perspicacity of the modeler determines the quality of the algorithms used. Analytica facilitates the modeler's work and uncertainty analysis by allowing each parameter to be given a meaningful name and easily inspected for its relationships and dependencies via the influence diagram.
Where uncertainty is a lack of information about input data, variability is a description of the diversity of those data. This difference means that uncertainty is often underestimated and variability overestimated when the focus is solely on random error in the data (without taking measures of statistical variance into account, for example). On the other hand, the degree of variability can itself be a source of uncertainty, and therefore subject to uncertainty analysis.
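The last point, treating the degree of variability as itself uncertain, can be illustrated with a small two-level Monte Carlo sketch. This is a hypothetical Python example, not Analytica code: the outer loop samples an uncertain population spread, and the inner loop samples the population given that spread:

```python
import numpy as np

rng = np.random.default_rng(1)

# Outer loop: our uncertainty about the degree of variability (sigma).
# Inner loop: the variability itself within one scenario.
outer_runs, sample_size = 200, 1_000
observed_spreads = np.empty(outer_runs)
for i in range(outer_runs):
    sigma = rng.uniform(5, 15)                        # uncertain spread (made-up range)
    population = rng.normal(100, sigma, sample_size)  # diversity within the population
    observed_spreads[i] = population.std()

print(f"spread ranges from {observed_spreads.min():.1f} "
      f"to {observed_spreads.max():.1f}")
```

The wide range of observed spreads across outer runs shows how uncertainty about variability propagates into the results, which is exactly what makes it a legitimate target of uncertainty analysis.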
Stabilization of uncertainty
As uncertainty is propagated through a model in Analytica, increasing the number of Monte Carlo runs causes the aggregate uncertainty in the final result to stabilize, more or less quickly depending on the model: once it has stabilized, you know how uncertain the outcomes really are. Stabilization can be expressed, for instance, as the convergence of the outcome's standard deviation to within a given tolerance. How quickly it occurs depends on how you have defined the algorithms in your model.
Too much simulation is better than too little
To ensure that measures of uncertainty in outcomes stabilize realistically, do more simulations than you think are necessary. The integrated Monte Carlo functionality in Analytica means that such simulations remain relatively fast.
If you’d like to know how Analytica, the modeling software from Lumina, can help you to apply many kinds of statistical and uncertainty analysis, then try a free evaluation of Analytica to see what it can do for you.