
The TidalScale Blog

    Containers, DevOps and Software-Defined Servers: The Solution for High-Velocity Service Delivery

    Authored by: Ike Nassi

    Every day, more data center administrators are embracing DevOps best practices as a way to achieve high-velocity delivery of applications and services while optimizing the use of their IT assets. It makes sense: By extracting more utilization from resources you already own (or are already leasing in the cloud), you're lowering your TCO and gaining the flexibility you need to adapt to fluctuating workloads and exploding data volumes.

    Virtualization platforms play a huge role in DevOps environments, and among the most popular are containers, deployed as a lightweight alternative to running multiple virtual machines. Containers are useful because they combine programs, and sometimes data, into packages that can be allocated to servers more easily. But they must be carefully managed from an operational standpoint, and they introduce a whole host of challenges of their own, so an entire landscape of container management systems (CMS) has evolved to help with this difficult problem.
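
    As a concrete illustration of that packaging step, here is a minimal sketch using the Docker SDK for Python; the image name, container name, and resource limits are placeholders chosen for the example, not anything specific to TidalScale.

        import docker  # pip install docker; talks to a local Docker daemon

        client = docker.from_env()

        # Run a packaged service with explicit resource limits so it can be
        # allocated to a server (or Software-Defined Server) in a controlled way.
        cache = client.containers.run(
            "redis:7",                # placeholder image
            name="demo-cache",        # placeholder container name
            detach=True,
            mem_limit="512m",         # cap the container's memory
            nano_cpus=2_000_000_000,  # roughly two CPUs' worth of time
        )
        print(cache.short_id, cache.status)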

    At TidalScale we use containers extensively in our internal operations, where they help us manage dozens of servers. In fact, our entire QA system depends completely on containers, so we've learned how to use them effectively through very practical and intensive use. Through this experience, we have discovered tremendous synergies between containers and Software-Defined Servers that have dramatic positive impacts on overall resource utilization. We'd like to pass along some of these lessons.

    The TidalScale Solution

    To understand how TidalScale improves container management, it's important to first understand what we do. TidalScale introduced a new concept in the computing fabric: Software-Defined Servers, which allow users to aggregate off-the-shelf commodity servers in such a way that they form a virtual machine that spans the hardware servers but looks like a single large server to an operating system. This large virtual server can run a single guest operating system, such as Linux, which can in turn run application programs. Those familiar with virtualization often think of TidalScale's enabling technology as an inverse hypervisor.

    Neither the operating system nor the applications need to be modified.  The guest operating system assumes it has all the resources given to it, including, essentially, all the processors, all the memory, all the networks, and all the storage.[1]
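
    To see what this looks like from inside the guest, a short Python check run on the guest Linux (a minimal sketch, assuming the standard /proc interface) would simply report the aggregate of all the underlying hardware nodes:

        import os

        # On a Software-Defined Server, these guest-visible numbers reflect the
        # sum of the underlying hardware nodes, not any single physical box.
        print("Logical CPUs visible to the guest:", os.cpu_count())

        with open("/proc/meminfo") as f:
            memtotal_kib = int(f.readline().split()[1])  # first line is MemTotal
        print(f"Guest-visible memory: {memtotal_kib / 1024 ** 2:.1f} GiB")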

    Challenges with Container Management Systems

    Now that you understand a bit about what we do, let's talk about how Software-Defined Servers can benefit CMS environments.

    There are many challenges associated with container management, but we're only going to focus on a few.  First, containers are often used for microservices.  These containers tend to be small, but there may be a lot of them.  The containers may need to communicate with related containers and/or with databases.  Communication among containers typically uses TCP/IP, a full network protocol stack that in some cases is an unnecessary source of overhead[2].  The landscape of containers typically runs on a landscape of physical servers.  The mapping between these containers and servers[3] can get complicated and expensive to manage by hand, so container management systems were invented. A CMS is responsible for conforming to a discrete set of service-level constraints specified by users, monitoring the landscape through telemetry, and using various configuration tools to enforce conformance.
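
    The placement half of that job is essentially a bin-packing problem. A toy first-fit sketch in Python (the CPU and memory figures are made up for illustration, and real schedulers weigh many more constraints) shows why the mapping gets complicated as the number of containers grows:

        def first_fit(containers, servers):
            """Assign each container's (cpu, mem) demand to the first server
            with enough spare capacity; raise if nothing fits."""
            placement = {}
            for name, (cpu, mem) in containers.items():
                for server in servers:
                    if server["cpu_free"] >= cpu and server["mem_free"] >= mem:
                        server["cpu_free"] -= cpu
                        server["mem_free"] -= mem
                        placement[name] = server["id"]
                        break
                else:
                    raise RuntimeError(f"no server can host {name}")
            return placement

        servers = [{"id": f"node{i}", "cpu_free": 16, "mem_free": 64} for i in range(3)]
        containers = {"api": (2, 4), "db": (8, 32), "cache": (4, 16), "worker": (6, 24)}
        print(first_fit(containers, servers))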

    However, if one steps back a bit, this sounds like the job an operating system has to do: mapping processes to processors.  It also sounds a lot like what TidalScale has to do: mapping virtual processors to physical processors.  So we have three mappings going on simultaneously: containers mapping onto servers, processes mapping onto processors, and virtual processors mapping onto physical processors.  And they all work in isolation, with no information about what the other levels are doing. Can we do better?

    With TidalScale, in some cases we can eliminate the container management system altogether and have Linux manage containers directly, for example using Docker.

    So, simple OS process management plus TidalScale's WaveRunner point-and-click orchestration software can be a great enhancement to, and in some cases even an alternative to, container management systems, because it results in much simpler end-to-end processes[4].
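
    In that simplified setup there is no node-selection step left to automate. A sketch along these lines (the service names and image are placeholders, and it assumes the Docker SDK for Python on the guest Linux) just starts containers on the one large guest, and the Linux scheduler spreads their processes across all of the aggregated cores:

        import docker  # pip install docker; assumes a Docker daemon on the guest

        client = docker.from_env()

        # Every container lands on the same Software-Defined Server, so there is
        # no container-to-server mapping to manage; Linux schedules the processes.
        for svc in ["svc-a", "svc-b", "svc-c"]:  # placeholder service names
            client.containers.run("nginx:alpine", name=svc, detach=True)

        for c in client.containers.list():
            print(c.name, c.status)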

    Solving Mobility Issues

    Mobility is essential to good performance. The hardware resources lying underneath the virtual machine are dynamically and automatically associated with a set of corresponding virtual resources: virtual processors, guest physical memory, and so on.  To achieve optimal performance, these guest resources are mobile; they can migrate from physical server to physical server on a demand-driven basis, without human intervention, based on the instantaneous workload requirements.  Mapping adjustments are made using machine-learning techniques.  This mobilization of virtual resources is key to good performance because it lets the virtualization layer, invisibly to the operating system, map virtual resources to physical resources efficiently.  There are other advantages as well.  Virtual resource mobilization can be used to increase the reliability and elasticity of the system, enabling dynamic provisioning so a system can grow or shrink either under manual control or automatically.
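
    To make the idea concrete, here is a toy, purely illustrative placement decision in Python; it is not TidalScale's actual algorithm (which uses machine learning and much richer telemetry), just a sketch of the kind of demand-driven choice involved: move a virtual CPU toward the node that holds most of the guest pages it has touched recently.

        from collections import Counter

        def choose_home(recent_pages, page_to_node):
            """recent_pages: guest page numbers a virtual CPU touched recently.
            page_to_node: which physical node currently holds each guest page."""
            votes = Counter(page_to_node[p] for p in recent_pages)
            node, _ = votes.most_common(1)[0]
            return node

        page_to_node = {0: "node0", 1: "node0", 2: "node1", 3: "node1", 4: "node1"}
        print(choose_home([2, 3, 4, 4, 1], page_to_node))  # -> node1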

    Solving Container Overload

    The astute reader may have already identified the second issue that comes up: namely, having a large number of containers running on a single system could, over time, expose the system to becoming overloaded. TidalScale addresses this problem as well. TidalScale balances processor utilization and will[5] have the ability to add physical servers to a running system without restarting it. This can be done either manually or automatically, allowing the Software-Defined Server to adjust to the needs of the computing load as it increases (or, in fact, decreases). It is analogous to a guest operating system running a new program without having to reboot.
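
    A scale-up decision of this kind could look roughly like the following sketch; the threshold and the two helpers are assumptions made for the example, standing in for real telemetry and provisioning hooks that this post does not specify.

        SCALE_UP_THRESHOLD = 0.85  # assumed saturation threshold, illustration only

        def maybe_grow(utilization, add_physical_node):
            """Add a hardware node when the running system is near saturation."""
            if utilization > SCALE_UP_THRESHOLD:
                add_physical_node()  # node joins the live system; no guest reboot
                return True
            return False

        print(maybe_grow(0.92, lambda: print("bringing a spare node online")))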

    Solving Reliability Issues

    But this leads to a third issue.  As the size of the Software-Defined Server increases, one might think that the system would become inherently less reliable.  In fact, quite the opposite is true.  A TidalScale Software-Defined Server collects telemetry data on memory error rates, overheating, and network errors. When that telemetry points to a failing server, TidalScale leverages the same mobility of virtual resources to bring a spare hardware server online and migrate the load off the failing server so it can be shut down for repair or replacement.[6]  In the meantime, leveraging a CMS to manage even just a few Software-Defined Servers can greatly simplify deployments, eliminate networking overhead, and maintain redundancy.
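
    For illustration, a simplified health check over per-node telemetry might look like the sketch below; the field names and thresholds are assumptions for the example, not TidalScale's actual telemetry schema.

        # Assumed telemetry fields and limits, for illustration only.
        LIMITS = {"mem_error_rate": 1e-6, "temp_c": 85, "net_error_rate": 1e-4}

        def nodes_to_evacuate(telemetry):
            """telemetry: {node_id: {metric_name: value}}; flag nodes over any limit."""
            return [node for node, metrics in telemetry.items()
                    if any(metrics[key] > limit for key, limit in LIMITS.items())]

        telemetry = {
            "node0": {"mem_error_rate": 2e-6, "temp_c": 70, "net_error_rate": 1e-5},
            "node1": {"mem_error_rate": 1e-8, "temp_c": 66, "net_error_rate": 2e-5},
        }
        print(nodes_to_evacuate(telemetry))  # -> ['node0']: migrate its load, then repair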

    Today, TidalScale supports containers far larger than can fit on a single hardware server, and it works with CMS platforms completely without modification.

    A Better Picture for DevOps

    DevOps administrators take note: TidalScale is very friendly to container technology, but it also has significant simplification, performance, and reliability advantages over some container management scenarios.

    Learn more about TidalScale technology in my introductory video below, or read the IEEE white paper that Gordon Bell and I published recently.


    [1] The number of resources of each type is configurable at boot time, and in fact some of the resources can be reserved for internal operations.

    [2] Internally, TidalScale does not use TCP/IP.  Instead it uses a highly optimized proprietary protocol on commodity Ethernet that is invisible to the operating system and all applications.

    [3] This is called a bin-packing problem.

    [4] Read our DevOps Jenkins-to-WaveRunner testing process white paper as an example of this.

    [5] We have this in a working prototype, and expect it to appear in a future release.

    [6] As with most cars, most failures are not instantaneous, and there is more than sufficient time to take corrective action.

    Topics: TidalScale, linux containers, in-memory, software-defined server, composable infrastructure, devops