Every day, more data center administrators are embracing DevOps best practices as a way to achieve high-velocity delivery of applications and services while optimizing the use of their IT assets. It makes sense: By getting more utilization out of resources you already own (or are already leasing in the cloud), you lower your TCO and gain the flexibility you need to adapt to fluctuating workloads and exploding data volumes.
May (the month, not the guitarist from Queen) has always been synonymous with endings and beginnings. May marks the end of autumn in the Southern Hemisphere and the practical start of summer up north. It’s a big month for graduations (another ending), and just as big for weddings. You get the idea.
So with schools about to let out for summer, I thought we should look at our own report card.
In January, I argued that 2018 is shaping up to be the year of the Software-Defined Server. I pointed to a number of reasons why:
- Data is growing rapidly, putting pressure on IT infrastructures that simply aren’t built to keep up.
- To act on all that data quickly, businesses need to analyze it entirely in memory, which is roughly 1,000 times faster than flash storage (see the back-of-envelope sketch after this list).
- Today's on-premises and cloud data centers typically aren't equipped with servers that can provide a single memory space large enough to accommodate many data sets.
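That 1,000x figure tracks the ratio of typical access latencies. Here is a minimal back-of-envelope sketch, using order-of-magnitude latency figures that are assumptions for illustration, not measurements of any particular device:

```python
# Assumed, order-of-magnitude access latencies (illustrative only):
DRAM_LATENCY_NS = 100        # ~100 ns per DRAM access
FLASH_LATENCY_NS = 100_000   # ~100 us per NAND flash read

speedup = FLASH_LATENCY_NS / DRAM_LATENCY_NS
print(f"In-memory access is roughly {speedup:,.0f}x faster than flash")
# -> In-memory access is roughly 1,000x faster than flash
```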
It has been over 50 years since Gordon Moore observed that transistor density doubles roughly every two years. Over the decades, the interpretation of "Moore's Law" has broadened to mean that the performance of microprocessors, and of computers in general, doubles about every 18 months.
TidalScale has introduced a new concept in the computing fabric: Software-Defined Servers, which let users aggregate off-the-shelf commodity servers into a virtual machine that spans the hardware but looks like a single large server to an operating system. This large virtual server can run a single guest operating system, such as Linux, along with the application programs on top of it.
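To make the "single large server" idea concrete: from inside the guest, the standard OS interfaces simply report the aggregated resources of all the underlying machines. A minimal sketch, assuming a Linux guest (illustrative Python, not TidalScale tooling):

```python
# Illustrative only: inside the guest Linux, standard kernel interfaces
# report the combined resources of all underlying hardware nodes as if
# they belonged to one machine.
import os

def report_guest_resources():
    # Logical CPUs visible to the guest OS.
    cpus = os.cpu_count()

    # Total memory visible to the guest; the first line of /proc/meminfo
    # is "MemTotal:  <value> kB".
    with open("/proc/meminfo") as f:
        mem_kb = int(f.readline().split()[1])

    print(f"Guest sees {cpus} CPUs and {mem_kb / 2**20:.1f} GiB of RAM")

report_guest_resources()
```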
As the high-performance computing (HPC) community prepares to descend on Denver for SC17 next week, its members will arrive in the Mile High City with more baggage than the usual rolling carry-on. They'll also be packing some long-held expectations. One of these is that it's more or less impossible to create a real HPC system—a massive single system image—in the cloud. I fully anticipate they will leave Denver with the opposite expectation.
I'll be the first to acknowledge that there's a lot to the TidalScale story. Our Software-Defined Servers enable organizations to right-size servers on the fly to fit any data set. The process of creating one is fast, flexible and easy. With TidalScale, you can: …
Earlier this year, IDC surveyed 301 IT users at medium-sized and large enterprises, asking questions that allowed the research firm to gauge the relative efficiency of their data centers. (For reference, the average data center contained 386 blades and servers, while the largest third of those surveyed averaged 711.)
Last week, I explored some of the key issues and core benefits that are prompting enterprises to move to more flexible and cost-effective composable infrastructures. As I pointed out in Part 1 of this blog, composable infrastructure technologies from vendors like TidalScale are designed to address many of the most pressing issues in today's data centers…
Part 1: The Need for Composable Infrastructure
New approaches to infrastructure design are required if businesses are to keep up with the volume of data being generated, data whose timely analysis is paramount to remaining competitive in the digital economy. Newer approaches to infrastructure must focus on efficiency to…