The Ripple Effect

How Creating Visual Effects is Like DevOps (and Where Software-Defined Servers Can Help)


Before TidalScale, I spent years working with animation software to create visual effects for major motion pictures. As part of that process, I learned something that DevOps and IT administrators have since realized: delivering on difficult schedules often means making maximum and efficient use of hardware resources, and servers in particular.

But what does that have to do with DevOps? Read on to see what I mean.

Animation and visual effects (VFX) are enormously resource- and time-intensive. It takes a long time to render, or create, the various elements that come together in an animation or VFX shot. Rendering projects grind away on render farm servers for hours, so animators use render farm scheduling software to keep the farm as busy as possible. Given enough job pressure (a backlog of jobs waiting to run), utilization below 85% would be considered wasteful of compute resources, while anything above 93% would be considered good.
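As a rough illustration of that utilization target, here is a minimal sketch of the calculation; the function and the example numbers are illustrative, not taken from any particular scheduler:

```python
def farm_utilization(busy_core_minutes, total_core_minutes):
    """Fraction of the farm's available core-minutes actually spent rendering."""
    return busy_core_minutes / total_core_minutes

# Example: a 100-core farm over a 10-hour window offers 60,000 core-minutes.
# If 54,000 of them were spent rendering, the farm ran at 90% utilization --
# above the ~85% "wasteful" threshold, just short of the ~93% "good" mark.
print(farm_utilization(54_000, 60_000))  # 0.9
```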

A quick primer

Before we proceed further, a few definitions:

  • Job: a unit of work placed on a machine to complete an element or frame of a shot. One or more jobs may be required to complete a frame.
  • Element: a part of a frame that, when combined with other elements, creates an entire frame
  • Frame: a single image of motion media (video, movie, etc.), made up of one or more elements
  • Shot: a series of frames with no cuts
  • Sequence: a series of shots that tell a short part of the movie

And some assumptions:

  • The word "Rendering" is shorthand for many different types of element creation
  • The resource requirements of each element in a frame (memory, CPU, GPU, licenses) can be determined ahead of time
  • Not all elements in a shot have the same compute resource requirements
  • A shot uses a consistent set of render resources: textures, lighting models, character models, etc.
  • Frames do not have to be rendered in order, and can be ordered to make the best use of render and machine resources

As an example of these definitions and assumptions, the ILM video ‘The Visual Effects of "Transformers: Dark of the Moon"’ shows how multiple elements are combined to make up a frame, and how frames are combined to make up a shot. Those shots are combined to make up a sequence.

With any render scheduling software, there are setup and teardown times for each job. A reasonable goal might be keeping setup time under 3% of a job's wall-clock time. Chunking multiple renders into a single job balances the percentage of useful render time against the resource utilization of the job. By combining three 50-minute renders into a single job, a single 5-minute setup and teardown is amortized over 150 minutes of rendering, or roughly 3% overhead.
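To make that arithmetic concrete, here is a minimal sketch of the overhead calculation; the 5-minute setup/teardown figure is the assumption from the paragraph above:

```python
def setup_overhead(render_minutes_per_frame, frames_per_job, setup_teardown_minutes=5):
    """Fraction of a job's wall-clock time spent on setup and teardown."""
    render = render_minutes_per_frame * frames_per_job
    return setup_teardown_minutes / (render + setup_teardown_minutes)

# One 50-minute frame per job: ~9% overhead -- well above the 3% goal.
print(round(setup_overhead(50, 1), 3))   # 0.091
# Three 50-minute frames chunked into one job: ~3.2% overhead.
print(round(setup_overhead(50, 3), 3))   # 0.032
```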

We do not want a job to be killed mid-render because a resource was starved or limited. For this reason, even though a job's peak memory may be higher than its baseline, scheduling is done against the peak. Making this assumption allows for an easier discussion of the parameters involved; most render farm software I am aware of considers both peak and baseline compute resource requirements when making job placement decisions.

There are multiple ways to optimize completing a shot on a render farm. For example, a primary goal is to have a single job complete as much of a frame as possible. Doing so reduces the memory startup/teardown overhead, reduces the work required from the artist to break up a frame, and allows for better alignment between computer and job resources.

A closer look at one sample case 

The software I worked on would take estimates from the artist at submission time, and then sample the shot by rendering the first, last, and a few intermediate frames. The samples produced a memory curve used to estimate compute resources for the frames in between. Such a curve might look like the following:

[Figure: memory curve sampled across the frames of the shot]

Frames in this shot take from 50 minutes to 310 minutes to render, and consume between 6 GB and 51 GB of memory. Some of this memory holds shared render resources (textures, models, etc.), so it is a good optimization to place multiple jobs on a machine at the same time so they can share those resources. If we assume 64 GB machines, with 48 GB available to render jobs, then we could combine up to 6 jobs on a machine at the same time, but some of the jobs will not be able to run at all.

Anything above 48 GB in size is estimated, since we do not have a single computer capable of rendering the frame.
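The sampling approach can be sketched roughly as follows; the linear interpolation between sampled frames is an assumed model for illustration, not necessarily what the production software did:

```python
def estimate_memory_curve(sampled, first_frame, last_frame):
    """Estimate peak memory per frame from a few sampled renders.

    sampled: dict of {frame_number: peak_memory_gb} for the frames actually
    rendered (first, last, and a few in between); everything else is
    linearly interpolated between the nearest samples.
    """
    frames = sorted(sampled)
    estimates = {}
    for f in range(first_frame, last_frame + 1):
        if f in sampled:
            estimates[f] = sampled[f]
            continue
        lo = max(s for s in frames if s < f)   # nearest sample below
        hi = min(s for s in frames if s > f)   # nearest sample above
        t = (f - lo) / (hi - lo)
        estimates[f] = sampled[lo] + t * (sampled[hi] - sampled[lo])
    return estimates

# Artist submits frames 1-100; we render frames 1, 50, and 100 as samples.
curve = estimate_memory_curve({1: 6.0, 50: 51.0, 100: 12.0}, 1, 100)
print(round(curve[25], 1))  # ~28.0 GB estimated for frame 25
```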

For each job, there are three possibilities for matching the job to a computer:

  1. The job matches the computer's resources closely enough.
    We can schedule the job onto the machine and let it complete. Few of the machine's resources sit idle for too long.
  2. The job uses fewer resources than the machine has.
    We can attempt to find multiple jobs that can run on the computer without interfering with each other (the classic knapsack problem; a first-fit packing sketch follows this list). We would prefer jobs that use the same shot resources so that we can take advantage of the disk caches on the computer. Otherwise, additional memory must be used to cache each shot, or the shot resources are inadequately cached.
  3. The job uses more resources than the computer has.
    The job cannot run on the computer until it is broken up into more elements, reducing its resource load so that it fits. This can be a lot of work.
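Case 2 is essentially bin packing. A minimal first-fit-decreasing sketch, using made-up job sizes and the 48 GB per-machine budget from earlier, might look like this; a real scheduler also weighs CPU threads, licenses, and shared shot resources:

```python
def pack_jobs(job_memory_gb, machine_capacity_gb=48):
    """First-fit-decreasing packing of jobs onto machines by peak memory.

    A real scheduler would also prefer co-locating jobs from the same shot
    so they can share cached textures and models.
    """
    machines = []  # each entry: [remaining_gb, [(job, mem), ...]]
    for job, mem in sorted(job_memory_gb.items(), key=lambda kv: -kv[1]):
        for machine in machines:
            if machine[0] >= mem:          # first machine with room wins
                machine[0] -= mem
                machine[1].append((job, mem))
                break
        else:                              # no machine had room: start a new one
            machines.append([machine_capacity_gb - mem, [(job, mem)]])
    return [placed for _, placed in machines]

jobs = {"frame_0101": 34, "frame_0102": 12, "frame_0103": 11, "frame_0104": 30}
for i, placed in enumerate(pack_jobs(jobs), 1):
    print(f"machine {i}: {placed}")
# machine 1 ends up with frame_0101 (34 GB) + frame_0102 (12 GB) = 46 of 48 GB;
# machine 2 gets frame_0104 (30 GB) + frame_0103 (11 GB) = 41 of 48 GB.
```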

When the job doesn't fit the server

Our focus here is the third case, where the job does not fit. 

To make a render job fit onto a machine it doesn't normally fit on, there are multiple options, mostly revolving around reducing the dimensions of the frame:

  1. Tile the frame.
    Break the frame up into distinct tiles so that each job fits. One problem with this method is that memory requirements do not divide linearly with the tiling: if you break a frame into 2 parts, each job takes more than ½ the memory of the full frame, because the models and other render resources must still be available to every tile (see the toy model after this list).
  2. Break the frame up into multiple elements.
    Break the frame into more than one element, allowing multiple render passes to create the frame. As with option 1, the memory requirements do not scale linearly as the frame is broken into more elements. In addition, the elements must be recombined with a composite job. All of this requires work by the artist team to get the frame to the proper specifications for the client.
  3. Get a larger machine.
    For obvious reasons, a larger machine would be great but might be cost-prohibitive. Enter Software-Defined Servers, which use inverse virtualization technology to aggregate multiple existing commodity servers into one or more servers sized to handle virtually any workload. (And it achieves this with industry-standard hardware and no changes to your OS or application software.) With a large enough machine, the job will fit onto the machine, and this problem reduces to case 2, solving the knapsack problem.
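To illustrate why tiling does not cut memory linearly, here is a toy model; the 60% shared-resource fraction is a made-up parameter for illustration, not measured data:

```python
def tiled_memory_gb(full_frame_gb, tiles, shared_fraction=0.6):
    """Rough per-tile memory estimate when a frame is split into tiles.

    Assumes some fraction of the frame's memory (models, textures, other
    render resources) must be loaded by every tile, and only the rest
    divides across tiles. The 0.6 shared fraction is illustrative.
    """
    shared = full_frame_gb * shared_fraction
    per_tile = full_frame_gb * (1 - shared_fraction) / tiles
    return shared + per_tile

# A 60 GB frame split into 2 tiles still needs ~48 GB per tile,
# and into 4 tiles ~42 GB per tile -- far more than 60 / 4 = 15 GB.
print(round(tiled_memory_gb(60, 2)), round(tiled_memory_gb(60, 4)))  # 48 42
```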

Using Software-Defined Servers as part of the render farm

Any virtualization software requires additional setup/teardown time and reduces performance compared to bare hardware running the same software. Given that, we will attempt to run as many jobs as possible on bare hardware, and group the jobs that cannot run on bare metal together to get them completed.

With the sample shown previously, we might want to combine the frames as follows (a simple bucketing sketch follows the list):

  1. 28 jobs – 1695 combined minutes
    Any job under 12 GB: combine 4 jobs onto a machine at the same time, adjusting thread counts per job to keep the total thread count under the number of CPU threads supported.
    Because these jobs take less than 60 minutes to complete, we probably want to chunk 2 frames into a single job to reduce the startup/teardown percentage.
  2. 57 jobs – 8047 combined minutes
    Jobs under 34 GB: combine 2 jobs onto a machine at the same time. Interleave these jobs, pairing larger jobs with smaller ones to maximize computer resources. Because these all tend to run longer than 100 minutes, don't bother chunking them to save startup/teardown time.
  3. 19 jobs – 4350 combined minutes
    Any job under 46 GB: run it on its own computer.
  4. 10 jobs – 2930 combined minutes
    Any job larger than 46 GB: use TidalScale to create a 3-node computer, allowing for a virtualized computer of 120 GB. Run 2 jobs at a time on each such computer.
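Since the grouping above is just a set of memory thresholds, the bucketing logic can be as simple as this sketch; the thresholds come from the list, while the example sizes are illustrative:

```python
def batch_for_job(peak_memory_gb):
    """Assign a job to one of the four batches described above."""
    if peak_memory_gb < 12:
        return "batch 1: four per machine, frames chunked in pairs"
    if peak_memory_gb < 34:
        return "batch 2: two per machine, interleaved large with small"
    if peak_memory_gb < 46:
        return "batch 3: one per machine"
    return "batch 4: software-defined server (3 nodes, ~120 GB)"

for mem in (6, 20, 40, 51):
    print(f"{mem} GB -> {batch_for_job(mem)}")
```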

If Software-Defined Servers are not used, then we have 10 frames that need to be broken up using one of the methods described above. In my experience, breaking up frames like that takes several artists a few days, pulling them away from completing new shots.

Let’s assume that tiling is the method used to break up a frame, and that each frame can be broken into 4 parts. Because those 4 parts take more memory combined than the single frame, we have traded 10 jobs for 40 jobs. Since a large part of a job is compiling the models and associated render resources, we could reasonably expect the 40 jobs to take 3x the time of the original 10 jobs. That means we need 8790 combined minutes of compute time, with each job taking one computer.

However, if we assume Software-Defined Servers can be used (the arithmetic is checked in the sketch after this list):

  • Assume that the virtualized computer used for batch 4 above (jobs over 46 GB) imposes a 20% performance penalty, so its combined 2930 minutes stretch to roughly 3660 minutes. By taking the time to configure two TidalScale pods of 3 nodes each, allowing 4 jobs to run at a time, those jobs could be completed in about 915 minutes.
  • Using 5 computers for batch 2 reduces its time to about 804 minutes
  • Using 6 computers for batch 3 reduces its time to 725 minutes
  • Using 1 computer for batch 1 reduces its time to about 423 minutes. This work can be tacked onto the computers used for the two batches above instead of allocating a separate computer.
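The arithmetic behind those numbers is just combined minutes divided by the number of jobs running at once, stretched by the virtualization penalty where it applies. A quick check, interpreting the 20% penalty as running at 80% of bare-metal speed (the assumption stated above):

```python
def batch_wall_clock(combined_minutes, concurrent_jobs, perf_penalty=0.0):
    """Minutes to drain a batch, given how many jobs run at once.

    A perf_penalty of 0.2 models the virtualized machines running at 80%
    of bare-metal speed, stretching the combined minutes by 1 / 0.8.
    """
    return combined_minutes / (1 - perf_penalty) / concurrent_jobs

print(int(batch_wall_clock(2930, 4, 0.2)))  # 915: batch 4 on two 3-node pods, 4 jobs at once
print(int(batch_wall_clock(8047, 10)))      # 804: batch 2 on 5 machines, 2 jobs each
print(int(batch_wall_clock(4350, 6)))       # 725: batch 3 on 6 machines, 1 job each
print(int(batch_wall_clock(1695, 4)))       # 423: batch 1, 4 jobs at a time
```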

For the cost of 17 reasonably sized computers and a total of about 15 hours, the shot is complete.

Because we know in advance (or have a very good idea of) the memory and other compute resources needed to complete each frame, and there is a well-defined API for setting up a Software-Defined Server, the setup and teardown of the larger virtualized computers can be handled by the scheduling software at the appropriate time.

Doing so adds another layer to the goals. In addition to reducing the startup/teardown for a job, we also want to reduce the machine configuration startup/teardown for a group of jobs. Since we already know which jobs should be combined, this goal should be easily met.
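Here is a sketch of how the scheduler might wrap a group of oversized jobs with that machine-level setup and teardown. PodAPI is a hypothetical stand-in for whatever interface the software-defined-server product exposes; the method names are illustrative, not the actual TidalScale API:

```python
# Minimal sketch of scheduler-driven pod lifecycle. PodAPI is a hypothetical
# placeholder; its methods are illustrative, not real API calls.
class PodAPI:
    def create_pod(self, nodes):
        print(f"assembling a {nodes}-node software-defined server")
        return {"nodes": nodes}

    def run(self, pod, job):
        print(f"rendering {job} on the {pod['nodes']}-node machine")

    def destroy_pod(self, pod):
        print(f"returning {pod['nodes']} nodes to the bare-metal farm")


def run_oversized_batch(jobs, nodes_per_pod=3, api=None):
    """Pay the machine configuration setup/teardown once for a whole group of jobs."""
    api = api or PodAPI()
    pod = api.create_pod(nodes_per_pod)
    try:
        for job in jobs:              # jobs the scheduler has already grouped together
            api.run(pod, job)
    finally:
        api.destroy_pod(pod)          # nodes rejoin the regular render pool


run_oversized_batch(["frame_0420", "frame_0421", "frame_0422"])
```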

Reclaiming days of productive time

The traditional (and limited) approach to rendering can tie up artists for days, just as traditional approaches to development and application delivery can unnecessarily tie up DevOps personnel. A better approach is to efficiently assign workloads to Software-Defined Servers created from servers you already have in house.

Learn more about Software-Defined Servers here

 

Topics: in-memory, software-defined server, devops, VFX, render farm
