
Composing a New Future for the Data Center

The world is moving – hurtling, really – toward composable, on-demand infrastructure. Or hyperconverged. Or virtual. Take your pick. All these terms more or less describe the same thing.

At TidalScale, we prefer to describe dynamically configured infrastructure as a “software-defined data center (SDDC)” because that term does the best job describing the essence of what happens when you virtualize servers, storage and networks into pools of data center and cloud resources. They all become software defined. In fact, as a recent article in The Next Platform notes, “These days, ‘software-defined’ has become the adjective of choice for infrastructure.”

Software is (or should be) the active ingredient in this prescription. From what we’ve witnessed, there’s no real need for exotic hardware design, proprietary interconnects or elaborate workarounds. These simply add to your capital and operating expenses, reduce operational flexibility, and give your incumbent IT vendors another hook to keep your business for years to come.  

This is particularly relevant for memory-intensive use cases. In the article I link to above, a Dell EMC executive complains that today’s approach to composability is I/O-centric, and that the I/O-composable world sits behind a protocol stack. From the perspective of most hardware vendors – particularly those that sell servers and storage – that’s true. To them, DRAM is a “trapped resource.”

I suppose if I worked for Dell, I’d feel the same way. Today, even as storage and networks have become software defined, servers remain fixed resources in traditional on-premises or cloud data centers. A single server can only accommodate so much memory. Purchasing larger servers is costly, and waiting for funding delays insight. Meanwhile, distributing applications and workloads across clusters requires rewriting code, which brings its own costs. Too often, this forces users to shrink the size of their workload to fit their available resources – a problematic approach at a time when workloads are only growing larger and more complex.

For vendors counting on yet-to-be-invented technology like Dell’s PowerEdge MX offering, the answer lies in network I/O – even as they point out, correctly, that the necessary shift is toward “memory-centric, in-memory computing.” That’s why they are betting on industry efforts like Gen-Z, a high-speed interconnect fabric that would allow an OS to send memory where it’s needed.

Gen-Z just published its 1.0 specification, but it will be some time before products based on it reach the market – likely not until around 2020.

Meanwhile, data sets continue to grow, and workloads become less predictable by the day. Many data center administrators can’t afford to wait around to see what comes of the Gen-Z effort or other future composability solutions.

They don’t have to. Breakthrough composability options already exist today. These solutions deliver the final piece of the SDDC puzzle – Software-Defined Servers that can be configured in less than five minutes to meet virtually any workload, especially in-memory workloads. And it’s all accomplished with commodity servers and interconnects.

The modern, composable data center is inevitable. But you don’t have to wait to make it part of your future. In fact, you can build your own today, with the hardware you already have.

Topics: software-defined server, software-defined data center, composable infrastructure, Memory, DRAM
