Faster evaluation in Analytica 4.6

As we celebrate the release of Analytica 4.6 this week, I thought I would share the results of some benchmark timings that I carried out to test how the evaluation speed in 4.6 compares with the previous two Analytica releases.

As with every release, Analytica 4.6 includes numerous changes that we hope speed things up on average. Several examples include:

  • An improved algorithm for evaluating the Iterate function when it appears within a Dynamic loop (i.e., a convergence computation carried out within each time step; see the sketch after this list). In this specialized situation, the change improves the complexity of the overall evaluation.
  • An ability to detect certain cases in which iterated slices will not appear in the final result, allowing computation of those slices to be skipped.
  • Internal improvements to memory allocation and deallocation, resulting in a general speed-up across array operations, especially those involving arrays containing only numbers and Null.
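To picture the first item, here is a minimal illustrative sketch in Python. It is not Analytica syntax and not the actual algorithm; it only shows the nested structure of a convergence loop ("Iterate") running inside a time-step loop ("Dynamic"), which is why the cost of the inner loop multiplies across every time step. The step function, tolerance, and iteration limit are assumptions made up for the example.

    # Illustrative sketch only (not Analytica code): a convergence loop
    # nested inside a time-step loop. The inner loop's cost is paid at
    # every time step, so improving it affects the overall complexity.

    def run_dynamic(initial, time_steps, step, tol=1e-9, max_iters=100):
        """For each time step, iterate to a fixed point before moving on."""
        state = initial
        results = []
        for t in range(time_steps):
            value = state
            # Inner convergence loop: the "Iterate within Dynamic" case.
            for _ in range(max_iters):
                new_value = step(value, t)
                if abs(new_value - value) <= tol:
                    break
                value = new_value
            state = value
            results.append(state)
        return results

    # Example usage with a toy update rule (an assumption for illustration):
    # at each time step t, converge x -> (x + t) / 2.
    trajectory = run_dynamic(0.0, time_steps=5, step=lambda x, t: (x + t) / 2)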

Although these sound impressive on the surface, in a codebase as highly optimized as Analytica's, a change that improves one situation almost always impacts other situations negatively. So the important question is whether, and by how much, the changes improve speed on average over the situations that arise in substantial real-life models.

To measure this, we timed evaluation speed across several large models that Analytica modelers have built. These benchmarks were chosen independently of the enhancements in 4.6, with one exception: the Iterate-in-Dynamic enhancement mentioned above was motivated specifically by the RP1 application, so I exclude that model from the averages. All of these models exercise a wide breadth of Analytica features and carry out a large amount of computation. To protect the proprietary nature of several of these benchmarks, I have assigned a cryptic 3-character name to each. All results are elapsed clock times, all measured on the same computer (Intel Core i7-2600 @ 3.4GHz, Windows 7) with no other applications running, and all using 64-bit editions of Analytica.

 

Benchmark   Elapsed time (seconds)                           Speed-up
            Analytica 4.4   Analytica 4.5   Analytica 4.6    4.4 to 4.6   4.5 to 4.6
AM1         27.78           26.74           22.44            24%          19%
AT1         25.59           23.85           10.09            154%         136%
CA1         18.59           15.53           12.41            50%          25%
CE3         66.67           64.98           67.03            -1%          -3%
ES1         69.00           61.65           66.52            4%           -7%
KI3         14.98           15.61           13.75            9%           14%
RP1         --              3751            39.85            --           9268%
PO5         0.147           0.216           0.178            -17%         21%
SE1         102.5           96.4            70.01            46%          38%
SS1         51.38           49.83           11.02            366%         352%
Average                                                      71%          66%

The RP1 result was not included in the average since, as mentioned earlier, one of the enhancements was created specifically for this model and would obviously bias the average in a misleading way. It remains an excellent model for benchmarking future releases. The evaluation times for the PO5 model give the false impression that it is a small model. In reality it is a substantial model in the same ballpark as the other benchmarks listed here: it contains 980 variables and several arrays and array operations involving hundreds of thousands of cells. It is probably also the most highly optimized-for-speed model I've encountered, tuned originally for Analytica 3.1, with few remaining opportunities for improved algorithms, which accounts for its extreme speed.

The percent speed-up measures how much more work Analytica 4.6 carries out in a fixed amount of time, computed as the ratio of the elapsed times minus 1. A 25% speed-up means that 4.6 completes 25% more computation per unit time. You can convert a percentage to a fraction and add 1 to get the fold-speed-up; for example, the AT1 result of 136% means that 4.6 was 2.36 times as fast, performing a little over twice as much work in the same amount of time.
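For concreteness, here is a minimal Python sketch of that arithmetic, applied to a few rows from the table above. The dictionary layout and print format are my own choices for the example, not anything produced by Analytica.

    # Percent speed-up = old_time / new_time - 1; fold-speed-up = old_time / new_time.

    timings = {  # benchmark: (Analytica 4.5 seconds, Analytica 4.6 seconds)
        "AM1": (26.74, 22.44),
        "AT1": (23.85, 10.09),
        "SS1": (49.83, 11.02),
    }

    for name, (t_old, t_new) in timings.items():
        speedup = t_old / t_new - 1   # e.g. AT1: 23.85/10.09 - 1 = 1.36 -> 136%
        fold = t_old / t_new          # e.g. AT1: 2.36-fold
        print(f"{name}: {speedup:.0%} speed-up ({fold:.2f}-fold)")

The reported averages (71% and 66%) are the mean of the per-benchmark percentages, excluding RP1, rather than a ratio of total times.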

There are many operations that could be benchmarked: model load time, parse time, user-interface operations, time to produce a graph, data import or export time or bandwidth, and so on. These benchmarks were all selected to measure only model evaluation time, beginning after a model has been fully loaded into memory.

The substantial variation in the percent speed-up highlights that the benefit differs greatly from model to model. I selected these models to be a representative sample of real-life models, so that the average reflects typical usage as objectively as possible. With that caveat, the conclusion is that Analytica 4.6 evaluates models about 66% faster than the previous release.


Lonnie Chrisman

Lonnie Chrisman, PhD, is Lumina's Chief Technical Officer, where he heads engineering and development of Analytica®. He has authored dozens of refereed publications in the areas of machine learning, Artificial Intelligence planning, robotics, probabilistic inference, Bayesian networks, and computational biology. He was in eighth grade when he had his first paid programming job. He was awarded the Alton B. Zerby award as "Most outstanding Electrical Engineering Student in the USA" in 1979. He has a PhD in Artificial Intelligence and Computer Science from Carnegie Mellon University and a BS in Electrical Engineering from the University of California at Berkeley. Lonnie used Analytica for seismic structural analysis of an extension that he built to his own home, where he lives with his wife and four daughters, so he really trusts Analytica's calculations!
