If a model is to yield a result, it needs data. Data are described by a data model, sometimes very simple (‘year’); sometimes more complex (energy consumption by type of energy, consumer category, geographical area, month, year and other attributes); and sometimes more difficult to ascertain (unstructured text, for example – by keyword?). The nature of the data model affects the model itself, including the speed at which results can be calculated and the ease with which the model can be modified. Here are some of the more common data modeling traps to avoid.
No well-defined aim
If you don’t know where you’re going, any road will take you there. If the goal of the model itself is unclear (model trends in energy consumption? In energy popularity? In energy availability?), your data modeling is likely to be unclear as well. Conversely, if your data modeling is unclear, it will be harder to obtain meaningful results from your model. In particular, attributes that are ill-defined or left to user interpretation are invitations to confusion.
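As a minimal sketch of the difference (the record layout, field names and units here are hypothetical, not taken from any particular Analytica model), compare an ill-defined record with one whose attributes are pinned down explicitly:

```python
from dataclasses import dataclass
from enum import Enum

# Ill-defined: every attribute is open to interpretation.
vague_record = {"energy": 120, "area": "north", "period": "2023"}
# 120 what - kWh or MWh? Which "north"? Calendar year or fiscal year?

# Well-defined: each attribute has a stated type, unit and meaning.
class EnergyType(Enum):
    ELECTRICITY = "electricity"
    GAS = "gas"

@dataclass(frozen=True)
class ConsumptionRecord:
    energy_type: EnergyType
    consumer_category: str   # e.g. "residential", "industrial"
    region_code: str         # ISO 3166-2 code, e.g. "FR-IDF"
    year: int                # calendar year
    month: int               # 1-12
    kwh: float               # always kilowatt-hours

record = ConsumptionRecord(EnergyType.GAS, "residential", "FR-IDF", 2023, 6, 120.0)
```

Nothing about the second version is cleverer; it simply leaves no attribute to user interpretation, which is exactly what a well-defined aim demands of the data model.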
What people say they want in a model and what they in fact need may be two different things. Modelers, like physicians seeking causes behind symptoms, need to go beyond model descriptions to understand the business rationale. Taking model specifications too literally may lead to an unsuitable model using misaligned data modeling.
Speculative data modeling
If you don’t need particular attributes or relationships in your data modeling now, resist the temptation to put them in ‘just in case’. As Extreme Programming (XP) puts it: ‘YAGNI’ – ‘You Aren’t Gonna Need It’. In any case, Analytica makes it easy to change your data modeling later if your business model changes. Speculative data modeling is therefore all too likely to be wasted effort.
Huge data models
Some business models are complex. Others cannot easily or reasonably be reduced to simpler forms until you’ve tested the model for sensitivity to different factors. However, data modeling that involves too many objects or tables and their attributes will penalize visibility and performance. Remember Occam’s Razor: use no more objects and attributes than those necessary to explain something or, if you prefer, the simpler model (out of two equally plausible ones) is often the better one.
Cryptic names
Cryptic or meaningless names – an affliction of spreadsheets – make for obscurity and errors in data modeling. Keeping track of even hundreds of cell references (B2, A34, P409 and so on) in a spreadsheet model is practically impossible, and spreadsheet applications are not designed to let you define real names for that many variables. Analytica, on the other hand, is designed for just that – and it automatically updates formulas using those names if you change them at a later stage (unlike spreadsheets).
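A tiny illustration of the point (the variable names and figures are invented for the example): the same arithmetic written with spreadsheet-style references and with descriptive names.

```python
# Cryptic: what does this compute, and is it safe to change?
B2, A34, P409 = 1200.0, 0.21, 12
X = B2 * A34 * P409

# Descriptive: the intent is self-evident and far safer to modify.
monthly_consumption_kwh = 1200.0
price_per_kwh = 0.21
months_per_year = 12
annual_cost = monthly_consumption_kwh * price_per_kwh * months_per_year

assert X == annual_cost  # identical arithmetic, very different readability
```

The computation is identical; only the names differ – which is the whole argument for a tool that lets every variable carry a real name.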
If you’d like to know how Analytica, the modeling software from Lumina, gives you visibility throughout your models and your data modeling, then try a free evaluation of Analytica to see what it can do for you.