The Ripple Effect

9 Ways to Press the Easy Button for Scalability

In some recent blogs, we covered eight reasons why Software-Defined Servers can help reduce OpEx and CapEx, while helping data center managers extract maximum use and value from existing IT resources. And last week, we illustrated how you can achieve some startling real-world performance gains by implementing Software-Defined Servers.

Today, let’s look at how simple, straightforward and transparent Software-Defined Servers are. 

In fact, I like to think of Software-Defined Servers as the easy button for scalability – a far less burdensome alternative to traditional scale-out schemes that require extensive recoding or sharding of large data sets, and a much more desirable approach than shrinking the size of your problem to fit your available computer.

As you may know, Software-Defined Servers allow you to scale your computer to match the size of your problem, completely on demand, by combining multiple commodity servers into one or more virtual servers. You gain access to all the resources associated with those servers – cores, memory, I/O, etc. – and your software sees them as a cohesive whole, not a distributed array of disparate but networked components.

Here are nine ways Software-Defined Servers allow you to scale without the headache, expense or time investment of traditional scaling approaches:

1. You can stand one up in minutes. Unlike with traditional clusters, you can mount and boot a Software-Defined Server in a few minutes. (And when we say “minutes,” we don’t mean just coming in under an hour so we can justify using the m-word. We mean closer to 10.) Here are the steps.
  1. Cable the systems you want to combine into a virtual server using industry standard interconnects.
  2. Choose the systems you want aggregated into your Software-Defined Server. Once selected, you’ll see all the resources your Software-Defined Server will use as if they were installed on a single server. In the background, TidalScale automatically and transparently establishes a virtual LAN connecting the nodes via 10Gb Ethernet.
  3. Boot the nodes.
  4. Select and boot your Guest OS.

Once the Guest OS is booted, you’ll be working with a Software-Defined Server that your application thinks is one system. No coding to a parallel framework, no StarCluster configuration files, none of that. All your application sees is one system, configured exactly as you want it.
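The aggregation idea behind those steps can be sketched in a few lines of Python. This is a toy model, not TidalScale software or its API: it simply shows what “one cohesive whole” means – the guest OS sees the summed resources of every physical node as a single machine.

```python
# Toy model of resource aggregation (not TidalScale's actual software):
# the guest OS sees the combined resources of all nodes as one machine.

nodes = [
    {"cores": 4, "memory_gb": 512},  # each entry is one commodity server
    {"cores": 4, "memory_gb": 512},
    {"cores": 4, "memory_gb": 512},
]

software_defined_server = {
    "cores": sum(n["cores"] for n in nodes),
    "memory_gb": sum(n["memory_gb"] for n in nodes),
}

print(software_defined_server)  # {'cores': 12, 'memory_gb': 1536}
```

The point of the sketch: nothing in the application has to know there are three boxes underneath – it just sees 12 cores and 1.5TB of memory.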
2. Reconfigure your server just as quickly. When one big problem is done, disassemble your virtual server and stand up another, configuring it exactly as you need for the problem at hand.
3. Configure servers to meet varying workloads. Say you have 20 servers, each with 4 cores and 512GB of memory, at your disposal. One option would be to create a single Software-Defined Server with 80 cores and 10TB of memory. Some problems are suited to that configuration, but others might require more memory than processing power, or vice versa. From the same pool, for instance, you could boot an 8TB Software-Defined Server with 20 cores alongside a 60-core server with 2TB of memory. By creating multiple Software-Defined Servers – at TidalScale, we call them TidalPods – you can run multiple workloads at once across your inventory of servers, instead of waiting until one workload is completed and then sequentially starting the next.
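The arithmetic behind that 20-server example can be checked in a short sketch. The pod names and shapes here are our own illustration, not a TidalScale API – the point is that the pooled cores and memory can be carved asymmetrically.

```python
# Hypothetical illustration: carving a pool of 20 commodity servers
# (4 cores, 512 GB each) into two differently shaped TidalPods.

SERVERS = 20
CORES_PER_SERVER = 4
MEM_GB_PER_SERVER = 512

pool_cores = SERVERS * CORES_PER_SERVER            # 80 cores in the pool
pool_mem_tb = SERVERS * MEM_GB_PER_SERVER / 1024   # 10 TB in the pool

pods = {
    "memory-heavy": {"cores": 20, "memory_tb": 8},   # e.g. big in-memory model
    "compute-heavy": {"cores": 60, "memory_tb": 2},  # e.g. parallel analytics
}

# The two pods together consume exactly the pooled resources.
assert sum(p["cores"] for p in pods.values()) == pool_cores
assert sum(p["memory_tb"] for p in pods.values()) == pool_mem_tb
```

Both pods run concurrently, so neither workload waits on the other.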
4. There are no code changes, ever. When you create a Software-Defined Server using TidalScale’s HyperKernel software – which sits between your bare-metal commodity server hardware and whatever OS and applications you’re running – both your OS and applications run natively on the resulting virtual machine. We could go on, but frankly it’s that simple. You run your OS, apps and models completely without modification. It just works.
5. Constantly monitoring and optimizing cluster resources is no longer a thing. Running data science tools like Python across clusters requires constant monitoring and optimization, making analysis an incessantly labor-intensive process. For data scientists trying to distribute their problem across clusters, former TV pitchman Ron Popeil’s anthem to “Set it and forget it!” may fall on the ear like a cruel taunt. But TidalScale makes setting and forgetting a reality, because TidalScale’s Software-Defined Server technology is self-optimizing. TidalScale uses advanced machine learning to mobilize all resource types, enabling them to flow around the system like the tides. Interrupts, memory, cores and other resources automatically flow to where they are needed most within the virtual system, improving performance and productivity – and relieving users of the headache of making sure the system is operating at its peak.
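As a toy analogy for that “resources flow like the tides” idea – and emphatically not TidalScale’s actual machine-learning algorithm – imagine each virtual resource migrating to whichever node has been touching it most:

```python
# Toy analogy (not TidalScale's algorithm): a virtual resource migrates
# to the node that referenced it most recently/often, so computation
# and data end up co-located without any user intervention.

access_counts = {"node-a": 3, "node-b": 17, "node-c": 5}  # recent touches

# Pick the node with the highest demand as the resource's new home.
home = max(access_counts, key=access_counts.get)
print(home)  # node-b
```

The real system makes decisions like this continuously, across interrupts, memory and cores, which is why there is nothing for the user to tune.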
6. More users, less training. Chances are, not just anyone in your organization has the expertise to run Python across clusters. But with TidalScale, virtually anyone, with very little training, can stand up a Software-Defined Server and run their modeling or analytics problem.
7. No need for specialty hardware or HPC platforms. HyperKernel is designed to work with industry-standard commodity hardware and Ethernet interconnects – the kind of resources you’ll find in virtually every data center. No proprietary platforms needed here.
8. You get answers sooner. Would your job be easier if you received answers sooner? How about 300X sooner? Yeah, we thought so.
9. You get to focus on your real job. All this means data scientists and analysts can focus on data science and analytics, not on how to get their code to run across clusters.

Scalability doesn’t have to be difficult or expensive. With Software-Defined Servers, you can quickly and easily scale well beyond the limits of a single system – and on resources you already have in house. If that isn’t easy, we don’t know what is.

Discover how TidalScale can help you press the easy button for scalability.

Take TidalScale for a Test Drive

Topics: Multiprocessor, TidalScale, in-memory, big data, software-defined server
