The Ripple Effect

Why Two Tech Legends Changed Their Minds About the Future of Computers

In 1984, Ike Nassi, now an accomplished technologist and entrepreneur, was vice president of research at Encore Computer. He and his colleagues, along with Encore co-founder Gordon Bell, the legendary engineering vice president at Digital Equipment Corp. and originator of Bell’s Law of computer classes, submitted a proposal to DARPA. They hoped the defense-focused research agency would fund the development of a distributed approach to strongly coherent shared memory. The work was founded on the notion that applications are more easily written, and deliver results sooner, when their data is entirely resident in memory.

DARPA saw the value in the research and provided more than $20 million to develop an advanced hierarchical version of the Encore Multimax in hardware. Ike and his team built and demonstrated the new system to DARPA some five years later.

Gordon and Ike continued to discuss what it would mean to create a system that makes a large, strongly coherent memory space available to applications with large datasets. They saw others try to solve the problem by scaling out, adding layers of complexity and requiring users to extensively reprogram applications with distributed processing technologies like MPI — technologies designed to give big problems access to clusters of distributed computers large enough to handle them.

As Chief Scientist at SAP when HANA was created, Ike witnessed again and again how much organizations needed big-memory computers. But he also saw the challenges of meeting that need. It was a constantly recurring theme for him: as computing problems grew, the need for large-memory computing became increasingly obvious, and yet the available solutions were too complex, too expensive, or both.

In 2012, Ike approached Gordon, his longtime friend and mentor, with a new idea. Unlike the advanced Encore work, which required new hardware, Ike claimed the goal of creating a large shared memory system could be achieved entirely in software. The software, he explained, would aggregate industry-standard components – x86 processors and Ethernet, resources that already exist in any datacenter – and present them to users and applications as a single, combined system. Best of all, creating this Software-Defined Server would require no changes whatsoever to applications or operating systems.

The Software-Defined Server was a simple solution to what had historically been a persistently complex and costly problem. Gordon loved the idea. So, Ike founded TidalScale, and Gordon joined a small group of other visionaries who became the company’s first investors.

It’s a great story, and in a new cover feature of the latest issue of IEEE Computer, Ike and Gordon tell it in fascinating detail. And once again, this remarkable pair has co-authored something that’s bound to change how businesses and organizations solve their biggest problems.

Download the article here.

Watch Ike give an overview of Software-Defined Servers here.


Topics: Large memory, in-memory, software-defined server, in-memory performance, in-memory computing, HANA, Gordon Bell, IEEE Computer
