
The TidalScale Blog

    Simulation Applications on TidalScale


    TidalScale, Powering Software-Defined Servers:

    Executing and analyzing large, complex models poses unique computing challenges. And these challenges are only growing as companies face the need to process, analyze, and act on ever-increasing amounts of data. Typically, IT professionals are faced with two options: scale up by purchasing extremely expensive specialized computers, or scale out by rewriting applications with complex distributed algorithms to run on clusters of standard hardware. One costs money, the other costs time. And in today’s budget-conscious, real-time world, few organizations can afford either.

    Software-Defined Servers, an emerging category in the scalable computing market, offer a way to tackle large data challenges using commodity hardware that is already available. TidalScale Software-Defined Servers combine the simplicity and performance of a scale-up solution with the price/performance value and deployment flexibility of scale-out. By giving organizations an easy way to scale large data problems across virtually any population of commodity servers, TidalScale Software-Defined Servers simplify and speed the processing of big data while giving customers flexibility in how they use their hardware resources.

    The TidalScale HyperKernel scales up the execution of unmodified applications beyond the limits of a single physical server. TidalScale’s Software-Defined Server requires no changes to operating systems or application software. The result: improved application performance and faster insights, without the expense of scaling up or the time lost to traditional scale-out efforts.
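
    To make the "unmodified" point concrete, here is a minimal, hypothetical sketch (not TidalScale code) of how an ordinary JVM application discovers its resources. On a Software-Defined Server, the same standard calls would simply report the pooled cores and memory of the whole TidalPod:

        // ResourceProbe.java -- a plain JVM program with no TidalScale-specific
        // code. The standard Runtime calls report whatever cores and memory the
        // server presents; on a TidalPod that is the aggregate of all nodes.
        public class ResourceProbe {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                // Logical CPUs visible to this process.
                System.out.println("Available processors: " + rt.availableProcessors());
                // Maximum heap the JVM will try to use (bounded by -Xmx or visible RAM).
                System.out.printf("Max heap: %.1f GB%n", rt.maxMemory() / 1e9);
            }
        }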

    A Simulation Customer on TidalScale Using AnyLogic:

    TidalScale Software-Defined Servers eliminate traditional limitations on simulation applications and enable exploratory analysis to scale without changing the models, tools, or operating environment. On TidalScale, simulation applications can run larger-scale and finer-grained simulations more easily than previously possible.
    TidalScale worked with a large financial institution to run an AnyLogic model of customer behavior. The model ran on an unmodified Linux 7.2 installation on a TidalPod with 3.5 TB of memory, spanning five physical servers transparently. For the first time, the customer was able to run their simulation at a 1:1 granularity level (one agent per customer) and perform sensitivity analysis on the outcomes. TidalScale had a dramatic impact on three of this institution's most common uses of AnyLogic:
    • Model Granularity – A simulation is run to model the outcome of a specific decision. Increasing the agent count increases confidence in the result, but memory usage grows in direct proportion. TidalScale Impact: Scaled the agent count into the tens of millions for more accuracy in high-impact business decisions.
    • Sensitivity Analysis – The input parameters to a model are varied while the model is invoked repeatedly. This vets the sensitivity of the model to its input assumptions. The agent count, concurrent runs, and iterations can all scale in both memory and cores (a sketch of this pattern follows the list). TidalScale Impact: Eliminated limitations on the number and scale of concurrent runs, and decreased the time needed to validate a model.
    • Optimization – A model is invoked repeatedly to locate the inputs needed to achieve a desired outcome. The scaling factor, the number of inputs optimized over, and the iteration count all scale in both memory and CPU resources (see the second sketch below). TidalScale Impact: Allowed optimization across all of the inputs rather than a subset.
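
    To illustrate the sensitivity-analysis pattern, here is a hypothetical Java sketch. The runSimulation() method is a placeholder standing in for a real model invocation (it is not an AnyLogic API), and the thread pool shows why concurrent runs consume both cores and memory:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        // SensitivitySweep.java -- vary one input assumption across a range and
        // invoke the model once per value, with runs executing concurrently.
        public class SensitivitySweep {
            // Placeholder model: maps an input assumption (e.g., a churn rate)
            // to an outcome. A real agent-based run would also allocate memory
            // in proportion to its agent count.
            static double runSimulation(double churnRate) {
                return 1_000_000 * (1.0 - churnRate) * 0.8;
            }

            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(
                        Runtime.getRuntime().availableProcessors());
                List<Future<double[]>> runs = new ArrayList<>();
                // One concurrent run per candidate value of the input assumption.
                for (int i = 1; i <= 10; i++) {
                    final double rate = i / 100.0;
                    runs.add(pool.submit(() -> new double[] { rate, runSimulation(rate) }));
                }
                for (Future<double[]> f : runs) {
                    double[] r = f.get();
                    System.out.printf("churn=%.2f -> outcome=%.0f%n", r[0], r[1]);
                }
                pool.shutdown();
            }
        }

    On a large memory pool, both the number of concurrent runs and the per-run agent count can grow without the runs competing for a single machine's RAM.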
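
    The optimization pattern is similar: the model is treated as a black box and invoked repeatedly while searching the input space for the value that best achieves a target outcome. The sketch below uses a simple grid search over one hypothetical input; nothing in it is TidalScale- or AnyLogic-specific:

        // OptimizeInput.java -- exhaustive grid search over a single input.
        // With more memory and cores, the same pattern extends to searching
        // across all inputs at once rather than a subset.
        public class OptimizeInput {
            // Placeholder model: a response curve with an interior optimum.
            static double runSimulation(double price) {
                return price * Math.max(0, 10_000 - 900 * price);
            }

            public static void main(String[] args) {
                double bestPrice = 0, bestOutcome = Double.NEGATIVE_INFINITY;
                for (int i = 0; i <= 100; i++) {
                    double price = i * 0.2;                 // candidate input
                    double outcome = runSimulation(price);  // one model invocation
                    if (outcome > bestOutcome) {
                        bestOutcome = outcome;
                        bestPrice = price;
                    }
                }
                System.out.printf("best price=%.2f -> outcome=%.0f%n", bestPrice, bestOutcome);
            }
        }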

    For more details, see AnyLogic on TidalScale.