The TidalScale Blog

Focus on Possibilities, Not Limits


Computer Science is obsessed with "negative results" and "limits." We seem to delight in pinpointing a terminus for virtually any technology or architecture – to map the place where the party ends.

Take, for example, Amdahl's Law, which seems to suggest that beyond a certain point, parallelism doesn't help performance. Amdahl's Law has kept many from believing that a market exists for bigger single systems, since the law leads us to conclude that larger multicore systems won't solve today's problems any faster.
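
To see where that intuition comes from, here is a minimal sketch of Amdahl's Law in Python, assuming a hypothetical workload that is 95% parallelizable: the speedup flattens toward 1 / (1 - p) no matter how many processors you throw at it.

```python
# A minimal sketch of Amdahl's Law (an illustration, not TidalScale code):
# with n processors and a parallelizable fraction p, speedup is
# 1 / ((1 - p) + p / n). The workload is an assumption: p = 0.95.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 8, 64, 1024):
    print(f"{n:4d} processors -> {amdahl_speedup(0.95, n):5.1f}x speedup")
# No matter how large n grows, the speedup never exceeds 1 / (1 - p) = 20x.
```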


Beware the Intuitively Obvious

The reasoning behind Amdahl’s Law turns on an assumption that all the parts of the problem must interact in such a way that the performance-constraining set of operations is nearly always sequential. It seems intuitively obvious that if programs are sequential and parallel programs are just a way to speed up a sequential program, then at some point parallelism cannot help. But intuitively obvious things can be very wrong.

Enter Gustafson

Contradicting this viewpoint is Gustafson’s Law, which states that computations involving arbitrarily large data sets can in fact be efficiently parallelized. Gustafson pointed out that Amdahl's Law focuses only on problems whose datasets are not expected to grow significantly. Gustafson's Law instead proposes that programmers tend to solve problems within a fixed practical time by sizing those problems to fit the equipment available to them. Therefore, if faster (more parallel) equipment is available, larger problems can be solved in the same time.
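
The contrast is easy to see in the same terms. The sketch below uses the same assumed 95% parallel fraction and Gustafson's scaled-speedup formula: when the problem grows along with the machine, the payoff from adding processors keeps growing too.

```python
# A minimal sketch of Gustafson's Law (again an illustration, not TidalScale
# code): if the problem grows with the machine, the scaled speedup is
# S(n) = (1 - p) + p * n, where p = 0.95 is the same assumed parallel fraction.
def gustafson_speedup(p, n):
    return (1.0 - p) + p * n

for n in (2, 8, 64, 1024):
    print(f"{n:4d} processors -> {gustafson_speedup(0.95, n):6.1f}x scaled speedup")
# The scaled speedup keeps growing almost linearly with the processor count.
```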

Many experts will quote Amdahl's Law as if it applies to every problem; relatively few are aware of Gustafson's Law (though it is well known in narrow systems architecture circles). Yet in the real world, where datasets grow larger by the day, actual use cases prove Gustafson was right: it turns out that most people scale systems up not to make a particular problem run faster, but to handle larger and larger problems of the same sort.

Combining the Best of Both

TidalScale is the first technology to fully implement Gustafson's Law, delivering scaled speed-up for computations on larger and larger datasets at low incremental cost. TidalScale’s Software-Defined Servers™ show that when you scale up a problem you can use more and more processors – and, in fact, virtually all the hardware resources in your domain, including memory, interconnects, persistent storage and networks.

A revolutionary new approach to virtualization and scalability, TidalScale’s Software-Defined Servers sit between any operating system and bare-metal hardware, enabling users to take advantage of hundreds of commodity servers to accelerate insight. A TidalScale Software-Defined Server presents all hardware as a single virtual server – and it achieves this without requiring a single change to operating systems or applications.

This is possible today because of four things:

  1. CPU hardware acceleration for memory management operations: this was put in place by CPU vendors for virtualization (specifically, VT-x and VT-d),
  2. Low network latency: TidalScale uses raw Ethernet frames on a standard 10G Ethernet as its system resource bus,
  3. TidalScale's virtualization of all resource types: memory pages, processor state, network I/O, and storage I/O, and
  4. Denning's locality principle combined with TidalScale's ability to apply dynamic locality learning (a toy sketch of the working-set idea follows this list).
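
For readers unfamiliar with item 4, here is a toy sketch of Denning's working-set idea using a made-up page-reference trace. It is meant only to illustrate the locality principle itself, not TidalScale's dynamic locality learning.

```python
# A toy illustration of Denning's working-set idea, not TidalScale's actual
# machine-learning algorithm: the pages touched in the last `window`
# references are a good predictor of what will be touched next, so keeping
# that set local turns most accesses into local accesses.
from collections import deque

def working_set(reference_trace, window):
    """Yield the working set after each reference, using a sliding window."""
    recent = deque(maxlen=window)
    for page in reference_trace:
        recent.append(page)
        yield set(recent)

trace = [1, 2, 1, 3, 1, 2, 4, 1, 2, 1]   # hypothetical page-reference trace
for step, ws in enumerate(working_set(trace, window=4)):
    print(f"after reference {step}: working set = {sorted(ws)}")
```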

TidalScale earned its name from our ability to mobilize all resource types, enabling them to flow around the system like the tides. TidalScale is unique in its ability to migrate interrupts and vCPUs to where they need to be delivered.

By combining the hardware cost linearity of scale-out with the software-development ease of scale-up, TidalScale delivers a radically improved price/performance ratio that enables a range of new use cases. No longer must data scientists, financial analysts and data management specialists constrain the size of the problem at hand to fit their available hardware, lose precious time rewriting code so software can run across clusters, or invest in new hardware.

Now they can economically size their systems to fit the size of their computing problems because TidalScale servers are:

  • Flexible. You can scale the system to the size of the problem using standard server hardware, and grow the system as your needs expand.
  • Fast. You can achieve in-memory performance at large scale with 5TB, 10TB, 50TB of memory or more, using dozens to hundreds of CPU cores.
  • Easy. TidalScale requires zero changes to the OS or applications with transparent optimization via machine learning. It just works.

Perhaps it’s time to focus less on limits and more on possibilities.

Topics: TidalScale, Amdahl, Gustafson, Multiprocessor, Large memory
