
The TidalScale Blog

    3 Ways to Amplify Container Performance

    A recent survey of 310 IT professionals found that container production operations have nearly doubled in the past year. Container technology is popular because it provides efficient utilization of isolated resources without all the overhead of traditional virtualization.

    One key advantage containers claim over traditional virtualization is that the isolation and portability of a full virtual machine can be achieved far more efficiently. Although containers package up user-space libraries and application code, they share a common kernel with the host. This enables huge savings in server resources, as memory management, the filesystem, networking, and other core pieces of a machine can be shared rather than duplicated as they are in traditional virtualization. This efficiency makes a containerized microservices architecture possible in situations where the overhead of traditional virtualization would be prohibitive.
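The shared-kernel point is easy to observe on a Linux host: a process inside a container reports the same kernel release as the host, because no second kernel is ever booted. A minimal illustration in standard Python (the comparison with a container is described in the comments, since it depends on a container runtime being present):

```python
import platform

# Containers share the host's kernel rather than booting their own,
# as a traditional virtual machine would. So running this snippet
# inside any container on a host prints the same kernel release as
# running it directly on the host, e.g. via:
#   docker run --rm python:3 python -c "import platform; print(platform.release())"
kernel = platform.release()
print(kernel)
```

The same check inside a traditional VM would instead show whatever kernel the guest OS booted, which is precisely the duplication containers avoid.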

    Containers on TidalScale take these advantages to a whole new level. TidalScale’s Software-Defined Server technology amplifies the economic advantages of containers by allowing them to float more efficiently across physical machines. And by removing the size limitations that exist with containers on hardware today, TidalScale enables more data-intensive applications to be containerized.

    TidalScale’s HyperKernel software combines with common datacenter designs to amplify the benefits of containers in three key ways. With TidalScale, you can:

    • Migrate resources together to achieve the best performance. For instance, TidalScale can migrate an entire virtual processor quickly and seamlessly from one server to the next. This ability to move and colocate resources is a fundamental requirement of a high-performance shared design. Using the virtualization features of the hardware, below the kernel, the TidalScale HyperKernel performs these migrations transparently and automatically.
    • Make more efficient use of a rack’s network bandwidth. A datacenter rack is commonly built with a top-of-rack (ToR) switch that bridges the nodes in the rack into the rest of the datacenter, so communication between nodes within a rack is much faster than between nodes in different racks. TidalScale leverages this by using a low-latency protocol directly over the physical Ethernet link rather than TCP/IP. Network communication between containers on a Software-Defined Server is more efficient because TidalScale handles that traffic itself: communication within a single node happens entirely in memory, while communication across nodes uses the low-latency protocol.
    • Dramatically simplify storage challenges by addressing them at the rack level. The ability to pool all of the resources in a rack while maintaining locality is particularly valuable for data-intensive containers. With TidalScale, all of the I/O resources, both networking and local disks, are pooled within a single virtual machine. While it can be difficult to satisfy the storage needs of a specific container with the storage available in a single server, it is not difficult when the local filesystem includes all of the drives in all of the nodes of a TidalScale system. This means a single guest OS can leverage storage at the rack level with a potentially huge buffer pool (and data replication only needs to be done between racks).
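The HyperKernel automates these decisions below the guest kernel, but the locality logic behind the three points above can be sketched in a few lines. All function names and data structures below are hypothetical illustrations, not TidalScale APIs:

```python
# Hypothetical sketch of the rack-level locality decisions described
# above; names and structures are illustrative only.

def migration_target(vcpu_page_hits):
    """Point 1: colocate resources. Pick the node holding most of a
    vCPU's recently touched pages, so the processor migrates toward
    its working set."""
    return max(vcpu_page_hits, key=vcpu_page_hits.get)

def transport(src_node, dst_node):
    """Point 2: locality-aware networking. Traffic between containers
    on the same node stays in memory; traffic between nodes in the
    rack uses a low-latency link-layer protocol instead of TCP/IP."""
    return "shared-memory" if src_node == dst_node else "raw-ethernet"

def pooled_capacity_gb(rack):
    """Point 3: rack-level storage. A single guest OS sees every
    drive on every node as one pool."""
    return sum(sum(node["drives_gb"]) for node in rack)

rack = [{"drives_gb": [960, 960]}, {"drives_gb": [1920]}]
print(migration_target({"node0": 120, "node1": 875}))  # node1
print(transport("node0", "node0"))                     # shared-memory
print(pooled_capacity_gb(rack))                        # 3840
```

The common thread is that each decision is made from physical locality alone, which is exactly the information a rack-scale system has and a single server does not.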

    Containers on TidalScale effectively flatten the view of the datacenter to the natural physical locality of its compute, memory, storage, and networking resources, improving the performance and efficiency of containers while greatly simplifying their administration. Quite simply, running containers on TidalScale enables the rack, rather than the server, to become the fundamental computing unit in the datacenter. And that leads to significant performance gains.

    Topics: TidalScale, linux containers, software-defined server