The TidalScale Blog

    A Big Data Challenge, Finally Solved

    Posted by Gary Smerdon on Jan 5, 2017 7:03:00 PM

    When I talk with people about the rapidly increasing volumes of data they rely on to run their business, I describe data growth in terms of the cost of sending a kid to college. Today, tuition and fees at an out-of-state public school average nearly $25,000 a year. If those costs grow at the rate that data is growing – 62 percent CAGR – then by the time your newborn daughter heads to college, her freshman year alone will cost more than $200 million!
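    If you want to sanity-check that figure, the compounding is simple to reproduce. The quick Python sketch below uses the $25,000 starting tuition and 62 percent CAGR quoted above, plus an assumed 18-to-19-year horizon from birth to freshman year; under those assumptions the total crosses $200 million at roughly the 19-year mark.

        # Back-of-the-envelope compounding: $25,000/year tuition growing 62% annually.
        # The starting tuition and growth rate come from the example above; the 18-
        # and 19-year horizons (birth to freshman year) are illustrative assumptions.
        tuition_today = 25_000
        cagr = 0.62

        for years in (18, 19):
            future_cost = tuition_today * (1 + cagr) ** years
            print(f"After {years} years: ${future_cost:,.0f}")

        # Prints roughly $147.6 million at 18 years and $239.2 million at 19 years,
        # which is how a $25,000 tuition bill becomes "more than $200 million."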

    Your data is growing just that quickly, I tell them. And that’s usually the point where whoever is still standing needs to find a seat.

    The visionaries who founded and shaped TidalScale saw this coming. Even before the Internet of Things (IoT) radically escalated the velocity of data available to businesses, TidalScale’s founders – who have invented and launched breakthrough technologies at companies like SAP, Apple and Cisco – recognized that as organizations become more data-driven, their hardware budgets could never grow fast enough to keep pace with the demands of their business. (Try telling your CEO that your data center needs to expand by nearly two-thirds every single year.)

    Yet they also saw first-hand how poorly traditional scale-up and scale-out solutions address in-memory computation, which much of big data analytics requires. They saw analytics teams lose crucial time to insight when they had to rewrite applications to run across clusters, and watched others delay meaningful results by shrinking their problem to fit the limits of their computing resources. They saw data scientists fall over the Memory Cliff – the point where a problem’s thirst for memory exceeds a single system’s ability to quench it. And they understood that as data growth accelerated, these problems would only grow more crippling.

    Fortunately, they assembled a team that imagined a different future for computer scalability. They envisioned a revolutionary way to pool multiple commodity systems into a single virtual machine that incorporates all the memory, all the CPUs, all the disk storage, and all the other resources of those physical servers. Better still, they found a way to achieve this without requiring a single modification to either the OS or the application you’re running.

    This is the Software-Defined Server as envisioned by TidalScale, a revolution in scalability that’s helping organizations of all sizes overcome the challenge of working with big data. TidalScale’s vision is simple: To allow you to scale your system to the size of your problem. To enable you to focus on the problem, not on working around the constraints of your resources. To allow you, perhaps for the first time, to ask: What could I solve with 50TB of memory and hundreds of cores? And then to enable you to find out.

    That’s what TidalScale is about.

    In the coming weeks, check back with us to learn how this vision is transforming data science, financial analytics, simulation, bioinformatics, business intelligence, and IT infrastructure. Find out how our customers are using Software-Defined Servers to shatter the barriers that are keeping them from the answers they need, when they need them. And discover, finally, what it means to see rapidly growing data not as an ever-worsening problem, but as an ever-more-exciting opportunity.

    Take a Test Drive.

    Topics: big data, software-defined server