The Ripple Effect

Why Wait for HPE’s The Machine?

TidalScale, software-defined server, in-memory performance, infrastructure

Today, Hewlett Packard Enterprise (HPE) unveiled a prototype of a massive server designed around memory – 160TB of it, in fact. In announcing the concept system, the latest project in HPE’s research effort known as The Machine, HPE chief Meg Whitman reasoned, “We need a computer built for the Big Data era.”

Read More

3 Secrets to Right-Sizing a Server

software-defined server, in-memory performance

I’ve grown accustomed to the stares of disbelief. It usually starts like the conversation I had the other day with some folks from a leading North American insurance company. They were planning to roll out an advanced new analytic model. Trouble was, they had no way to predict how much compute or memory capacity they’d need.

Read More

Predicting Yesterday’s Weather

Large memory, software-defined server, in-memory performance

While we can never predict tomorrow’s weather with 100 percent reliability (at least not yet), we can predict yesterday’s weather with 100 percent certainty.

What does this have to do with anything?

Well, it turns out that meteorologists aren’t the only people who use historical data in an attempt to predict reasonable futures. 
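
To make the connection concrete, here’s a minimal sketch in Python (3.10+) of the same idea applied to infrastructure: fit a simple trend to historical memory usage and extrapolate a reasonable future. The usage figures are hypothetical.

# Sketch: project future memory demand from a hypothetical usage history.
from statistics import linear_regression  # Python 3.10+

months = [1, 2, 3, 4, 5, 6]
gib_used = [220, 260, 310, 390, 470, 580]  # hypothetical monthly peaks, GiB

slope, intercept = linear_regression(months, gib_used)
for future_month in (7, 8, 9):
    projected = slope * future_month + intercept
    print(f"month {future_month}: ~{projected:.0f} GiB projected")

Real capacity planning is messier than a straight line, of course; the point is simply that yesterday’s measurements are the firmest ground we have for sizing tomorrow’s systems.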

Read More

Why You Need a BFC (Part 2)

big data, software-defined server, in-memory performance, infrastructure

Last week, I looked at some of the compelling reasons for transforming a set of commodity servers into a big flexible computer, or BFC. At TidalScale, we call this a Software-Defined Server -- a single virtual machine that operates across multiple nodes and makes all of the aggregated resources available to the application. But for today’s blog, it’s BFC all the way.
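
To see what “aggregated resources” means in practice, here’s a minimal sketch (Python, assuming a Linux guest) that reports what the operating system, and therefore the application, actually sees. On a Software-Defined Server, these figures reflect the combined CPUs and memory of all the underlying nodes, even though no single physical box has that much.

import os

# Report the resources visible to this OS instance. Inside a
# Software-Defined Server, these reflect the aggregate across nodes.
cpus = os.cpu_count()
ram_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

print(f"visible CPUs:   {cpus}")
print(f"visible memory: {ram_bytes / 2**30:.1f} GiB")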

Read More

Why You Need a BFC (Part 1)

TidalScale, virtualization, in-memory performance, data center

If you’re at all familiar with TidalScale, then you know we believe people should fit the computer to the problem, rather than the other way around. We believe in new technologies that can be adopted easily, in leveraging advances in cost-effective hardware, and in automation. We believe you shouldn’t have to invest in new hardware to solve large or difficult computational problems. We believe commodity, industry-standard technologies hold remarkable power and possibilities that are just waiting to be tapped.

Read More

300x Performance Gains Without Changing a Line of Code

TidalScale, software-defined server, in-memory performance

In Gary Smerdon’s last post, he listed eight ways Software-Defined Servers can help reduce OpEx and CapEx while helping data center managers extract maximum use and value from existing IT resources.

As vital as these benefits are to IT, operations, finance and other areas, the ability to scale your system to the size of your problem is just as beneficial to scientists and analysts – the people on the front lines of big data analytics. If you fall into that camp, then you’re probably familiar with the dreaded “memory cliff.”
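
For the uninitiated: the memory cliff is what happens when a working set outgrows physical RAM and the operating system starts paging to disk. Here’s a minimal Python sketch of how it shows up; the working-set sizes are illustrative assumptions and would need to grow past your machine’s actual RAM before the cliff appears.

import random
import time

PAGE = 4096  # typical OS page size in bytes

def touch_random_pages(buf, touches=100_000):
    # Read one byte from `touches` randomly chosen pages and time it.
    # Once `buf` no longer fits in RAM, each touch risks a page fault
    # to disk, and the elapsed time jumps by orders of magnitude.
    n_pages = len(buf) // PAGE
    start = time.perf_counter()
    total = 0
    for _ in range(touches):
        total += buf[random.randrange(n_pages) * PAGE]
    return time.perf_counter() - start

for gib in (1, 2, 4):  # illustrative; grow past physical RAM to see the cliff
    buf = bytearray(gib * 2**30)
    secs = touch_random_pages(buf)
    print(f"{gib} GiB working set: {secs:.3f}s for 100k random reads")
    del buf

That is the appeal of scaling the system to the problem: keep the working set in (aggregate) memory, and the cliff never comes into view.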

Read More