I’ve known Ike Nassi since we both worked at Digital Equipment back in the good old days, and I’ve always enjoyed talking to Ike about computer architecture. In some ways, what Ike’s doing at TidalScale gives me déjà vu: it recalls what DEC did in that timeframe when it introduced its first minicomputer with real virtual memory – the VAX-11/780. Virtual memory is an abstraction – let’s pretend we have a lot of physical memory even though we don’t. TidalScale is an abstraction – let’s pretend we have a big, powerful computer even though we don’t. In both cases the idea might seem highly questionable to someone whose job is to squeeze the last iota of performance out of a computer. The 780 was cheaper than a mainframe but not inexpensive (~$200,000 at the time), so the buyer wanted to get the most out of it possible. The system was powerful for the day but puny by today’s standards (~1/1000 the power of the computer in your smartphone). So the idea of adding system software that pretended a hard disk was RAM (virtual memory) probably seemed to many hardcore programmers like the worst kind of computer-science hand-waving. I’m guessing that TidalScale sometimes gets the same reaction for the same reason – how can you pretend that a bunch of network-connected servers are one big server? More recently the hypervisor posed a similar challenge: how can you use software to pretend that the CPU is different?
In both cases (DEC VAX and TidalScale) the core motivation is system cost-efficiency. With the 780, disk storage was a lot cheaper than DRAM, so being able to substitute disk for RAM allowed DEC to cost-optimize the system. In the case of TidalScale, being able to substitute servers designed at the current technology sweet spot for the biggest/baddest high-end server potentially saves the customer a lot of money by exploiting cheaper processors (with more bang for the buck) and less expensive DRAM as well (in smaller, more cost-effective configurations).
It’s not surprising to me that some programmers would pooh-pooh the idea. Programmers are difficult customers generally – they all know what they want (and each wants something different), and many believe they could write something better than the available products if they could just spare a week (which they can’t). Abstractions like virtual memory and software-defined servers are frustrating to them for that reason, but in the larger context, adding this kind of abstraction and focusing programming attention on the part of the software that adds real value is a proven good idea.
In the case of virtual memory, the abstraction and system solution worked very well for the clear majority of applications because it turned out they had a “working set,” just as the pointy-headed computer scientists had predicted. At any point in time the application uses only a small subset of its entire memory space, and as long as that subset is smaller than real physical memory, virtual memory works just fine. Similarly, for most programs the overhead of a hypervisor is small and an excellent tradeoff given benefits like flexible deployment that virtualization provides.
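The working-set effect is easy to see in a toy simulation. The sketch below (my own illustration – the page counts, drift pattern, and LRU policy are assumptions, not a description of any real OS) generates an access stream with strong locality and counts page faults: once the number of physical frames covers the working set, the fault rate collapses.

```python
from collections import OrderedDict
import random

def simulate_faults(accesses, frames):
    """Count page faults for an access stream under LRU replacement."""
    resident = OrderedDict()  # page -> None, ordered oldest-first by recency
    faults = 0
    for page in accesses:
        if page in resident:
            resident.move_to_end(page)          # hit: refresh recency
        else:
            faults += 1                         # miss: fault the page in
            if len(resident) >= frames:
                resident.popitem(last=False)    # evict least recently used
            resident[page] = None
    return faults

# A program with locality: 1000 total pages, but at any moment it touches
# a working set of ~20 pages, which drifts to a new region occasionally.
random.seed(1)
accesses, base = [], 0
for step in range(10_000):
    accesses.append(base + random.randrange(20))
    if step % 200 == 199:                       # working set drifts
        base = random.randrange(980)

few_frames = simulate_faults(accesses, frames=10)     # smaller than working set
enough_frames = simulate_faults(accesses, frames=32)  # covers working set
print(few_frames, enough_frames)  # far fewer faults once frames >= working set
```

With 10 frames the program thrashes inside its own working set; with 32 frames it faults only when the working set drifts, even though physical memory covers barely 3% of the address space.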
Ike’s betting that the same thing applies to many applications at the system level – that they have actual system working sets – and that TidalScale’s ongoing optimization process (moving programs and memory into specific servers based on how the application operates) will provide the same efficacy at the server level that virtual memory had for an application. The abstraction provided by TidalScale – the appearance of a much larger server – is probably even more valuable than the abstraction provided by virtual memory if it enables an application owner to continue a “scale-up” strategy (more memory) rather than having to rewrite from scratch for a grid, scale-out architecture.
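TidalScale’s actual optimization machinery is proprietary, but the flavor of the idea can be cartooned in a few lines. The sketch below (entirely my own illustration – the function, the page-to-node map, and the greedy policy are assumptions, not TidalScale’s algorithm) picks the server for a virtual CPU by looking at where the pages it recently touched already live, the machine-level analogue of keeping a working set local.

```python
from collections import Counter

def choose_node(recent_pages, page_home):
    """Greedy placement heuristic: run the vCPU on whichever node already
    holds the most of its recently touched pages, minimizing remote accesses."""
    tally = Counter(page_home[p] for p in recent_pages)
    return tally.most_common(1)[0][0]

# Hypothetical layout: pages 0-99 live on node 0, pages 100-199 on node 1.
page_home = {p: p // 100 for p in range(200)}
recent = [5, 12, 150, 151, 152, 160, 171]  # mostly node-1 pages
print(choose_node(recent, page_home))  # -> 1
```

A real system would also weigh the cost of moving memory versus moving the computation, but the bet is the same either way: if the application has a stable machine-level working set, a placement like this converges and most accesses stay local.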
Initial results seem promising to me. It wouldn’t be the first time that abstraction carried the day.
Peter Christy is a Research Director at 451 Research and has covered TidalScale for 451 clients. Earlier Peter was a founder and VP of Software for MasPar, a mid-range SIMD supercomputer, so high-performance computing is a familiar area.