The Ripple Effect

Why You Need a BFC (Part 2)

Last week, I looked at some of the compelling reasons for transforming a set of commodity servers into a big flexible computer, or BFC.  At TidalScale, we call this a Software-Defined Server -- a single virtual machine operating across multiple nodes that makes all the aggregated resources available to the application. But for today's blog, it's BFC all the way.

Part 1 of this blog described the characteristics of a BFC, its unique advantages when it comes to flexibility, and how a BFC is built to support resilience architectures. All of this feeds a bigger-picture summary of its overall value: a BFC reduces the time it takes to deliver solutions and services while lowering IT costs.

Another benefit of a BFC is the performance gained by sizing the server to your problem.

Performance: A measure of throughput and efficiency

When you create a BFC with TidalScale's HyperKernel software, it makes sense to measure performance by throughput (work performed by the guest per unit of time) and efficiency (how effectively the compute resources are utilized to deliver that throughput).  For example, suppose a given workload takes 10 minutes to complete with 12% CPU utilization on a given system or cluster, and you then run the same workload on a TidalScale-hosted BFC.  If it executes on TidalScale in 2 minutes with 72% CPU utilization, that is a 5x performance gain with perhaps a 20% HyperKernel overhead (or tax). This tax is the cost of migrating memory pages, CPUs or I/O devices so the workload is processed as rapidly as possible. The bottom line here is that the increase in performance justifies the tax.
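To make the arithmetic behind that comparison concrete, here is a short sketch that reproduces the numbers above. The figures are the illustrative ones from the example, not measurements:

```python
# Worked example of the throughput/efficiency comparison above.
# Baseline: 10 minutes at 12% CPU utilization.
# BFC:       2 minutes at 72% CPU utilization.

baseline_minutes, baseline_util = 10.0, 0.12
bfc_minutes, bfc_util = 2.0, 0.72

# Throughput gain: the same work finishes in one-fifth the time.
speedup = baseline_minutes / bfc_minutes             # 5.0

# CPU-minutes actually consumed by each run.
baseline_cpu_min = baseline_minutes * baseline_util  # 1.2
bfc_cpu_min = bfc_minutes * bfc_util                 # 1.44

# HyperKernel "tax": extra CPU time spent relative to the baseline.
tax = bfc_cpu_min / baseline_cpu_min - 1.0           # ~0.20, i.e. 20%

print(f"{speedup:.0f}x speedup with roughly a {tax:.0%} overhead")
```

The BFC run burns 1.44 CPU-minutes against the baseline's 1.2, which is where the "20% tax" figure comes from: you pay a fifth more CPU time to get the answer five times sooner.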

TidalScale's ability to do this is based on P.J. Denning's paper, "The Working Set Model for Program Behavior." Since 1968, this paper has been acknowledged as the major foundation for the locality principle, which has proven effective across all modern computer architectures and applications.  Locality, which is the basis for any working set, has also been found to exist in many other informational, biological and physical systems.  TidalScale has extended the computational model for a working set to include CPU and I/O along with memory.  To deploy a BFC, we've mobilized these computational resources, and by applying the working set model, the TidalScale HyperKernel dynamically and automatically converges related resources to increase throughput and efficiency. Working set convergence reduces latency to memory and I/O devices and increases your application's ability to use processing power.
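For readers unfamiliar with Denning's model, here is a minimal sketch of the classic memory-only case: the working set W(t, τ) is the set of distinct pages a program referenced in its last τ references. (TidalScale extends the idea to CPUs and I/O; this toy covers only memory, and the reference stream is invented for illustration.)

```python
# Denning's working set: the distinct pages referenced in the
# window of the last `tau` references ending at time `t`.

def working_set(references, t, tau):
    """Distinct pages touched in the window (t - tau, t]."""
    window = references[max(0, t - tau):t]
    return set(window)

# A reference stream with strong locality: the program loops over a
# small set of pages, then shifts to a new set (a "phase change").
refs = [1, 2, 1, 2, 1, 2, 7, 8, 9, 7, 8, 9]

print(working_set(refs, 6, 4))   # {1, 2} -- first phase
print(working_set(refs, 12, 4))  # {7, 8, 9} -- after the phase change
```

Because the working set at any moment is small relative to the whole address space, a system that keeps each processor near its current working set -- which is what the HyperKernel's convergence aims at -- avoids most long-distance traffic.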

Production Ethernet is all you need

When talking about a high-performance system with a multitude of processors and resources, the question of network latency often arises.  Since there are many processors accessing many resources, it’s natural to assume that the interconnect can become overloaded.  Were this to happen, the overall system performance could be diminished.

TidalScale avoids this through the use of machine learning and dynamic workload distribution within the BFC. TidalScale’s patented algorithms analyze the stream of events coming from the workload and continually optimize the locality of processors, I/O handlers and memory. 

If you compare how we use the network to the way a memory bus is used, you’ll see a very big difference.  When a processor requests an un-cached address from memory, the memory has to respond with the data contained at that address.  In our case, often the better solution is to send the processor to the memory, so that subsequent references to that memory, and to its neighbors, do not need to use the interconnect at all!  The processor stays near the memory until the processor no longer needs that memory, or until that memory is moved to another node, or until the processor needs to be moved to another node.  We dramatically reduce the interconnect traffic by simply eliminating much of our need for the interconnect itself.
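The intuition above can be captured in a toy cost model. All the constants here are hypothetical round numbers chosen for illustration -- TidalScale's actual decisions come from its machine-learning algorithms, not a fixed formula -- but the break-even logic is the same: a one-time vCPU migration beats repeated page fetches once locality is high enough.

```python
# Toy cost model (all numbers hypothetical) for the decision described
# above: when a vCPU repeatedly touches pages on a remote node,
# migrating the vCPU once can be cheaper than pulling every page
# across the interconnect.

PAGE_FETCH_US = 3.0    # hypothetical cost to pull one page over Ethernet
VCPU_MOVE_US = 50.0    # hypothetical one-time cost to migrate the vCPU
LOCAL_ACCESS_US = 0.1  # hypothetical local memory access after migration

def cheaper_to_migrate(expected_accesses):
    """True if moving the processor to the memory wins."""
    fetch_cost = expected_accesses * PAGE_FETCH_US
    migrate_cost = VCPU_MOVE_US + expected_accesses * LOCAL_ACCESS_US
    return migrate_cost < fetch_cost

print(cheaper_to_migrate(10))   # False: few accesses, just fetch pages
print(cheaper_to_migrate(100))  # True: locality pays for the move
```

Once the vCPU is co-located with its working set, subsequent references cost nothing on the interconnect at all, which is the traffic reduction the paragraph above describes.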

Who needs a BFC?

In-memory applications for databases, data management, information analysis and decision support are hungry for four things: processors, memory, storage and bandwidth.  A TidalScale BFC has all of these and can scale flexibly as needed.  Every node added to a BFC adds that node’s compute, memory and I/O resources to the Linux guest.  And when you add more memory and processors, you add more memory bandwidth. 

Hadoop, Mongo, SAP HANA and many other scale-out solutions are ready to run on a TidalScale BFC today.  Right now, all of these solutions are plagued by the limits placed on memory and horsepower for any single node in the cluster.  TidalScale provides a cost-effective, easy and transparent way to scale up a single node, or every node, in an existing cluster.

PostgreSQL, Oracle and MySQL database applications are already multi-threaded and hungry for more memory caches and metadata storage.  TidalScale provides the ability to apply the terabytes of memory, hundreds of processors, and gigabits of bandwidth needed to satisfy increasing demand and provide a cookie-cutter approach to increased capacity.

Data center operators running hundreds of Linux instances, with hundreds of copies of Apache and hundreds of copies of MySQL, have a complicated task.  That approach, though common, is costly and energy-inefficient. Fortunately, the industry is moving in the opposite direction, toward the simplicity and efficiency of Linux Containers (LXC). The good news is that TidalScale supports LXC today. In fact, you would almost think that TidalScale's resource mobilization architecture was designed with LXC in mind. LXC on TidalScale gives operators a flexible, mobile, scalable and rapidly deployable solution for almost any application need.

Though virtualization is no longer new, TidalScale does bring something new to virtualization by extending the reach and capabilities of an operator's virtualized instances. Today, data center operators are either constrained by the size of "sweet spot" two-socket (2P) servers or must absorb the operational cost of deploying 4P or larger systems. With a machine image on TidalScale, an operator can provide instances at whatever size is needed -- with hundreds of processors and many terabytes of memory -- enabling a valuable competitive advantage for the operator, along with new service capabilities and increased value for customers.

Whether it’s modeling, electronic design automation, bioinformatics, genomics, reverse time migration in the oil and gas industry, or something new altogether, today’s data-driven applications need more memory, more processing, more bandwidth or all three. And this makes a TidalScale BFC the logical answer to an ever-growing list of resource questions.    

Learn more about TidalScale Software-Defined Servers.

Topics: big data, software-defined server, in-memory performance, infrastructure
