Most data center managers – and even many end users – are familiar with Software-Defined Networking and Software-Defined Storage. These battle-tested approaches to virtualizing existing assets make it easier for resources to zig when workloads zag. They introduce significant flexibility into the data center, which is a win for practically everyone involved.
But one piece has been conspicuously missing from the software-defined puzzle: the server. And as missing pieces go, it’s a big one. After all, don’t servers represent the core of the data center – the very engine around which all other resources and systems are built?
With data volumes growing and workloads hungering for more cores and memory, anyone struggling to keep up with big data or simply trying to address the needs of an array of applications probably could use some help on the server end. For instance, when problems are too large for an existing system, most users either rewrite applications to run across clusters, or they partition their memory-intensive big data workloads into pieces, a cumbersome workaround for running them entirely in memory. And when the fluid demands of a growing enterprise put new pressures on data centers, the usual answers – namely, scaling up via costly hardware investments – can rarely be counted on.
Enter Software-Defined Servers – the missing piece of the software-defined data center. As its name might suggest, a Software-Defined Server uses software to combine multiple physical computers into a single virtual system. The virtual system spans from one server to many, aggregating all the resources of the combined computers, including memory, CPUs, networking and storage. And all those resources are available as part of one large, fast, flexible server. So if you have 40 different servers across your data center or organization, and each is equipped with 512GB of RAM, you can combine them into a single Software-Defined Server and the result is indistinguishable from an actual physical server equipped with 20TB of memory. As far as the user, the application, the Operating System and the data are concerned, it’s all one computer. Suddenly the large-scale analyses that previously slowed your progress no longer overwhelm the computer; rather, you size the computer to the problem and insights emerge sooner.
Software-Defined Servers aren’t just fast, however. They’re flexible. In fact, you can arrange your population of nodes in any configuration that gives you the cores, memory and I/O you need to solve the problem at hand. Given the example above, you can create one system with 12TB of memory and four others, each with 2TB. Mathematically speaking, you have thousands of options at your disposal, delivering the kind of operational flexibility data center administrators have long sought but never found.
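The arithmetic behind these configurations is simple enough to sketch. The toy model below works through the numbers from the example above (40 nodes of 512GB each, carved into one 12TB system and four 2TB systems). The function and variable names are illustrative only, not a TidalScale API:

```python
# Toy model: carving a pool of identical physical nodes into
# Software-Defined Servers. Hypothetical names, not a real API.

NODE_RAM_GB = 512          # RAM per physical node, as in the example
POOL_SIZE = 40             # 40 nodes -> 40 * 512 GB = 20 TB in total

def nodes_needed(target_tb):
    """Whole nodes required to reach target_tb of aggregate memory."""
    target_gb = target_tb * 1024
    return -(-target_gb // NODE_RAM_GB)   # ceiling division

# The configuration from the text: one 12 TB system plus four 2 TB systems.
plan = [12, 2, 2, 2, 2]
allocation = {f"sds-{i}": nodes_needed(tb) for i, tb in enumerate(plan)}

print(allocation)                # {'sds-0': 24, 'sds-1': 4, 'sds-2': 4, ...}
print(sum(allocation.values())) # 40 -- the whole pool, exactly
```

The 12TB system consumes 24 nodes and each 2TB system consumes 4, so this particular plan uses all 40 nodes with nothing left over; any other partition of the pool is equally valid, which is where the "thousands of options" come from.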
At TidalScale, we draw a hard line about what truly constitutes flexibility in a Software-Defined Server. We believe it should conform to the industry-standard ethos found in other software-defined solutions in the data center. That means running on affordable, industry-standard hardware and operating systems, and working across standard Ethernet. No special hardware or proprietary interconnects here. For IT managers looking to extract the greatest return from their existing resources – while avoiding costly vendor lock-in – Software-Defined Servers represent a solution that even finance managers can get behind.
On top of it all, Software-Defined Servers are also easy. Choose the right Software-Defined Server platform, and you can achieve all this on commodity hardware and without a single modification to your application or your Operating System. Not one.
Together, the benefits of flexible, fast and easy are transformative. Imagine no longer waiting for results while you rewrite code, and no longer delaying insights by downsizing the problem to fit your memory constraints. And picture the impact on your budget: You can, entirely on demand, create a server large enough to solve your biggest big data challenges without having to purchase a single piece of new hardware, a single byte of new memory. Then do it all again, entirely on demand, to solve the next problem. (Watch this blog in the coming weeks as we explore each of these benefits in detail.)
How we do it: TidalScale’s HyperKernel software is the key to creating high-performance, cost-efficient Software-Defined Servers on demand. It sits between a standard OS and industry-standard, bare-metal hardware, virtualizing processors, memory, networking and storage and presenting them so the guest system executes as efficiently as possible. The TidalScale HyperKernel uses machine learning to react to guest system activity, reconfiguring physical resources with increasing effectiveness so that processors accessing the same memory blocks reside on the same node and share the same caches for as long as they continue to interact. It does this by rearranging the physical location of memory, processors and I/O devices on the fly, based on the observed working set of the current workload. This approach to virtualizing and mobilizing resources delivers the highest possible performance by pooling resources where they’re needed most.
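The locality idea described above can be illustrated with a minimal sketch: track which node's processors touch which memory pages, then migrate each page toward the node that accesses it most. This is a deliberately simplified heuristic for illustration, not TidalScale's actual HyperKernel algorithm, and all names are hypothetical:

```python
# Illustrative locality heuristic: co-locate memory pages with the
# processors that access them most. Not TidalScale's real algorithm.

from collections import Counter, defaultdict

class LocalityTracker:
    def __init__(self):
        self.accesses = defaultdict(Counter)  # page -> per-node access counts
        self.page_home = {}                   # page -> node currently holding it

    def record_access(self, node, page):
        self.accesses[page][node] += 1
        self.page_home.setdefault(page, node)  # first accessor is initial home

    def rebalance(self):
        """Migrate each page to the node that accesses it most often."""
        moves = []
        for page, counts in self.accesses.items():
            best_node, _ = counts.most_common(1)[0]
            if self.page_home[page] != best_node:
                moves.append((page, self.page_home[page], best_node))
                self.page_home[page] = best_node
        return moves

tracker = LocalityTracker()
tracker.record_access("node-0", "page-A")      # page-A starts on node-0...
for _ in range(10):
    tracker.record_access("node-1", "page-A")  # ...but node-1 touches it most
moves = tracker.rebalance()
print(moves)   # [('page-A', 'node-0', 'node-1')]
```

A production system would also weigh migration cost, recency of accesses and processor mobility (moving the virtual CPU to the data can be cheaper than moving the data), but the core feedback loop of observing the working set and relocating resources accordingly is the same.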
For data scientists, data center managers and even C-level executives, Software-Defined Servers change the game. They bring agility, allowing you to dynamically and economically scale the size of your computers in lockstep with your current and future computing requirements. They allow the most memory-dependent applications to run entirely in memory and well beyond the limits of a single physical system – boosting performance and productivity in ways that previously were possible only with traditional (and costly) scale-up methods.
For businesses needing a flexible, performant and easy way to accommodate spikes in workloads or future growth, Software-Defined Servers really do offer the missing piece of the puzzle: on-demand scalability with no hardware investment.