The Ripple Effect

TidalScale's Year in the News

TidalScale, software-defined server, server, in-memory performance, infrastructure, composable infrastructure

May (the month, not the guitarist from Queen) has always been synonymous with endings and beginnings. May marks the end of autumn in the Southern Hemisphere and the practical start of summer up north. It’s a big month for graduations (another ending), and just as big for weddings. You get the idea.

So with schools about to let out for summer, I thought we should look at our own report card.

Read More

The Software-Defined Future is Here

TidalScale, software-defined server, software-defined data center, composable infrastructure, right-sizing, Memory, Gordon Bell, IEEE Computer

In January, I argued that 2018 is shaping up to be the year of the Software-Defined Server. I pointed to a number of reasons why:

  • Data is growing rapidly, putting pressure on IT infrastructures that simply aren’t built to keep up.
  • To act on all that data quickly, businesses need to analyze it entirely in memory, which is 1,000 times faster than flash storage.
  • Today’s on-premises and cloud data centers typically aren’t equipped with servers that can provide a single instance of memory large enough to accommodate many data sets.
Read More

Containers and Software-Defined Servers: A Win-Win

TidalScale, linux containers, in-memory, software-defined server, composable infrastructure, devops

Background

TidalScale has introduced a new concept in the computing fabric: Software-Defined Servers, which allow users to aggregate off-the-shelf commodity servers into a virtual machine that spans the hardware servers but looks like a single large server to an operating system. This large virtual server can run a single guest operating system, such as Linux, which in turn runs application programs.

Read More

Deploy a "Supercomputer" in Minutes, Pay by the Hour

TidalScale, software-defined server, software-defined data center, TidalScale WaveRunner, composable infrastructure, SC17, hpc, supercomputing

As the high-performance computing (HPC) community prepares to descend on Denver for SC17 next week, its members will arrive in the Mile-High City with more baggage than the usual rolling carry-on. They’ll also be packing some long-held expectations. One of these is that it’s more or less impossible to create a real HPC system—a massive single system image—in the cloud.  I fully anticipate they will leave Denver with the opposite expectation.

Read More

TidalScale Open APIs Improve Data Center Utilization by Orders of Magnitude

TidalScale, software-defined server, infrastructure, software-defined data center, TidalScale WaveRunner, RESTful, API

How the WaveRunner API Enables Tomorrow’s SDDC Innovation, Today

Guest blog post by Chris Busse, CTO at APIvista

In my consulting work, I encourage enterprises of many sizes to use standardized APIs across their business areas. This means I’m often called upon to explain what an application programming interface is to non-technical stakeholders. 
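For readers who have never seen one, a RESTful API call is just a structured HTTP request. Below is a minimal Python sketch of what provisioning a right-sized server through such an API might look like; the host, endpoint, and payload fields are hypothetical illustrations, not taken from the actual WaveRunner API.

```python
import json
from urllib.request import Request

# Hypothetical base URL -- illustrative only, not the real WaveRunner endpoint.
BASE_URL = "https://waverunner.example.com/api/v1"

def build_create_server_request(cores: int, memory_gb: int) -> Request:
    """Build (but do not send) a REST request asking for a server of a given size."""
    body = json.dumps({"cores": cores, "memory_gb": memory_gb}).encode("utf-8")
    return Request(
        url=f"{BASE_URL}/servers",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A client would pass this request to an HTTP library to actually send it.
req = build_create_server_request(cores=128, memory_gb=4096)
print(req.get_method(), req.full_url)
```

The point of the sketch is that the interface is self-describing: any tool that can issue an HTTP POST with a JSON body can drive the data center, which is exactly what makes standardized APIs approachable for integration work.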

Read More

The Cloud as Rainmaker

TidalScale, software-defined server, Gartner Cool Vendor, Cloud Computing

In case you missed the news the other day, the analysts at Gartner have just named TidalScale a Cool Vendor in their 2017 Cloud Computing report. This latest report from one of the world’s most respected research outfits affirms once more that the time is right for TidalScale’s Software-Defined Servers, which bring flexibility to modern data centers by enabling organizations to right-size servers on the fly. (I say “affirms once more” because TidalScale was named an IDC Innovator just a few weeks ago.)

Read More

Why Wait for HPE’s The Machine?

TidalScale, software-defined server, in-memory performance, infrastructure

Today, Hewlett Packard Enterprise (HPE) unveiled a prototype of a massive server designed around memory – 160TB of it, in fact. In announcing this concept system, which is the latest project in HPE’s research effort known as The Machine, HPE chief Meg Whitman reasoned, “We need a computer built for the Big Data era.”

Read More

The Trouble with Hadoop

TidalScale, software-defined server, hadoop

Whenever IT folks talk about handling their big data problems by scaling out with Hadoop, I tend to think about the 1986 comedy, “Big Trouble in Little China.” It chronicles the mishaps that ensue when a trucker gets dragged into a mystical battle in Chinatown. It’s kind of awful, but with John Carpenter in the chair and Kurt Russell on the screen it still delivers some laughs.

Read More

Why You Need a BFC (Part 1)

TidalScale, virtualization, in-memory performance, data center

If you’re familiar at all with TidalScale, then you know we believe people should fit the computer to the problem, rather than the other way around.  We believe in new technologies that can be adopted easily, in leveraging advances in cost-effective hardware, and in automation. We believe you shouldn’t have to invest in new hardware to solve large or difficult computational problems. We believe commodity, industry-standard technologies hold remarkable power and possibilities that are just waiting to be tapped.

Read More

9 Ways to Press the Easy Button for Scalability

Multiprocessor, TidalScale, in-memory, big data, software-defined server

In some recent blogs, we covered eight reasons why Software-Defined Servers can help reduce OpEx and CapEx, while helping data center managers extract maximum use and value from existing IT resources. And last week, we illustrated how you can achieve some startling real-world performance gains by implementing Software-Defined Servers.

Today, let’s look at how simple, straightforward and transparent Software-Defined Servers are. 

Read More

A Market Awakens to the Value of Software-Defined Servers

TidalScale, software-defined server


You may have seen last week’s announcement that TidalScale was named an IDC Innovator in a recent report on software-defined solutions in the data center. IDC Innovators: Virtualizing Infrastructure with Software-Defined Compute, 2017 (March 2017) calls out TidalScale for allowing enterprises to “reuse commodity servers currently in service as workload demands arise.” That’s a gloriously concise way to bottom-line the

Read More

300x Performance Gains Without Changing a Line of Code

TidalScale, software-defined server, in-memory performance

In his last post, Gary Smerdon listed eight ways Software-Defined Servers can help reduce OpEx and CapEx, while helping data center managers extract maximum use and value from existing IT resources.

As vital as these benefits are to IT, operations, finance and other areas, the ability to scale your system to the size of your problem is just as beneficial to scientists and analysts – the people on the front lines of big data analytics. If you fall into that camp, then you’re probably familiar with the dreaded “memory cliff.”

Read More

For Cloud Infrastructure Providers, a Way to Do More than Ever Before

TidalScale, software-defined server, infrastructure, cloud

Guest Blog from Sheng Yeo, CEO of OrionVM, a partner of TidalScale

Cloud infrastructure providers today don’t have much flexibility when it comes to the systems they use. Resources devoted to running specific applications and workloads are generally confined to the limits of a single system, typically a “sweet spot” server that perhaps offers 24 cores and a few hundred gigs of memory. It's a matter of economics, really: 

Read More

Open Compute Rack & the Software-Defined Server

TidalScale, software-defined server, OCP

Let’s take a trip back in time. It’s 2009, and Facebook has just become the No. 1 social network in the United States.  In January of that year, Facebook reports it has 150 million users worldwide. Only eight months later, membership doubles to 300 million.

Read More

8 More Benefits of Software-Defined Servers

TidalScale, software-defined server

In my last blog, I touched on some of the benefits that make Software-Defined Servers the crucial missing piece of the software-defined data center.  Now it’s time to look closer at why Software-Defined Servers are so beneficial, and to whom.

Read More

The One Thing Every Data Center Needs (But Doesn't Have)

TidalScale, software-defined server

Most data center managers – and even many end users – are familiar with Software-Defined Networking and Software-Defined Storage. These battle-tested approaches to virtualizing existing assets make it easier for resources to zig when workloads zag. They introduce significant flexibility into the data center, which is a win for practically everyone involved.

But one piece has been conspicuously missing from the software-defined puzzle: the server.

Read More

3 Ways to Amplify Container Performance

TidalScale, linux containers, software-defined server

A recent survey of 310 IT professionals found that container production operations have nearly doubled in the past year. Container technology is popular because it provides efficient utilization of isolated resources without all the overhead of traditional virtualization.

Read More

System Abstractions - Déjà Vu all over again...

abstraction, Large memory, TidalScale, virtualization, virtual memory

I’ve known Ike Nassi since we both worked at Digital Equipment back in the good old days, and I’ve always enjoyed talking with Ike about computer architecture. In some ways, what Ike is doing at TidalScale feels like déjà vu with what DEC did in that timeframe when it introduced its first mini-computer with real virtual memory, the VAX-11/780. Virtual memory is an abstraction: let’s pretend we have a lot of physical memory even though we don’t. TidalScale is an abstraction: let’s pretend we have a big, powerful computer even though we don’t. In both cases the idea might seem highly questionable to someone whose job is to squeeze the last iota of performance out of a computer.

Read More

Focus on Possibilities, Not Limits

Large memory, Amdahl, Gustafson, Multiprocessor, TidalScale

Computer science is obsessed with “negative results” and “limits.” We seem to delight in pinpointing a terminus for virtually any technology or architecture – to map the place where the party ends.

Take, for example, Amdahl’s Law, which seems to suggest that once you reach a certain point, parallelism doesn’t help performance. Amdahl’s Law has kept many from believing that a market exists for bigger single systems, since the law leads us to conclude that larger multicore systems won’t solve today’s problems any faster.
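For reference, Amdahl’s Law fits in a few lines of Python; here p is the parallelizable fraction of the workload and n the number of processors:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup of a workload whose parallelizable fraction is p, run on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# With 95% of the work parallelizable, speedup can never exceed 1/(1-p) = 20x,
# no matter how many processors are added.
print(round(amdahl_speedup(0.95, 64), 1))    # 15.4 -- rises quickly at first...
print(round(amdahl_speedup(0.95, 4096), 1))  # 19.9 -- ...then flattens near 20x
```

The serial fraction (1 - p) dominates the denominator as n grows, which is exactly the “terminus” the law maps – and the assumption the next section questions.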


Beware the Intuitively Obvious

The reasoning behind Amdahl’s Law turns on an assumption that all the parts of the problem must interact in such a way that

Read More

Test Drive TidalScale for Yourself

Test Drive, TidalScale

TidalScale has built and operates very large (1.5TB to 3.0TB of RAM) TidalPods that customers can use to test and evaluate their applications on a single, virtualized, very large system. We call these proof-of-concept (PoC) systems. We are making them available to you for a limited time, at no charge, to test your application(s) in a large-scale environment. The basic testing program:

Read More
TidalScale featured in eWEEK!
Gary Smerdon in theCUBE Interview