It has been over 50 years since Gordon Moore observed that transistor density doubles every two years. Over the decades, the interpretation of “Moore’s Law” has evolved to mean that the performance of microprocessors, and of computers in general, doubles every 18 months.
In 1984, Ike Nassi, now an accomplished technologist and entrepreneur, was vice president of research at Encore Computer. He and his colleagues, along with Encore co-founder Gordon Bell, the legendary engineering vice president at Digital Equipment Corp. and originator of Bell’s Law of computer classes, submitted a proposal to DARPA. They hoped the defense-focused research agency would fund the development of a distributed approach to strongly coherent shared memory. The work was founded on the notion that applications are more easily written, and deliver results sooner, when the data is entirely resident in memory.
Today, Hewlett Packard Enterprise (HPE) unveiled a prototype of a massive server designed around memory – 160TB of it, in fact. In announcing this concept system, the latest project in HPE’s research effort known as The Machine, HPE chief Meg Whitman reasoned, “We need a computer built for the Big Data era.”
I’ve grown accustomed to the stares of disbelief. It usually starts like the conversation I had the other day with some folks from a leading North American insurance company. They were planning to roll out an advanced new analytic model. Trouble was, they had no way to predict how much compute or memory capacity they’d need.
While it’s true that we can never predict tomorrow’s weather with 100 percent reliability (at least not yet), it’s equally true that we can predict yesterday’s weather with 100 percent certainty.
What does this have to do with anything?
Well, it turns out that meteorologists aren’t the only people who use historical data in an attempt to predict reasonable futures.
Last week, I looked at some of the compelling reasons for transforming a set of commodity servers into a big flexible computer, or BFC. At TidalScale, we call this a Software-Defined Server -- a single virtual machine that operates across multiple nodes and makes all the aggregated resources available to the application. But for today’s blog, it’s BFC all the way.
If you’re familiar at all with TidalScale, then you know we believe people should fit the computer to the problem, rather than the other way around. We believe in new technologies that can be adopted easily, in leveraging advances in cost-effective hardware, and in automation. We believe you shouldn’t have to invest in new hardware to solve large or difficult computational problems. We believe commodity, industry-standard technologies hold remarkable power and possibilities that are just waiting to be tapped.
In Gary Smerdon's last post, he listed eight ways Software-Defined Servers can help reduce OpEx and CapEx, while helping data center managers extract maximum use and value from existing IT resources.
As vital as these benefits are to IT, operations, finance and other areas, the ability to scale your system to the size of your problem is just as beneficial to scientists and analysts – the people on the front lines of big data analytics. If you fall into that camp, then you’re probably familiar with the dreaded “memory cliff.”