
In-depth: Nvidia GRID 2.0, virtual machines, and the future of editing and post


Living on the GRID (2.0)

Rakesh Malik delves into the history of virtual machines and how new solutions, such as Nvidia GRID 2.0, could impact workflows and post houses.

Nvidia recently announced GRID 2.0, a new take on an idea that's been pretty well established in supercomputing for decades. To understand what GRID 2.0 does and what it offers, we first need to understand what virtualization is.

Virtual Machines

Java made the idea of the virtual machine famous. The Java compiler translated Java source code into bytecode, a form of machine language designed for the Java virtual machine, or JVM. The JVM would load the bytecode and compile it into machine code specific to the computer it was running on. Being a software implementation, it carried a lot of overhead, so for many years Java performance fell well short of native software performance. Although it was adequate for web and database applications that didn't require heavy-duty computing, it couldn't take advantage of hardware like GPUs. Later, Microsoft's .NET platform expanded on this idea by compiling its intermediate language down to machine code. The platform also enabled developers to implement optimized libraries compiled directly to machine code, using a trusted code model to help prevent malicious code.
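The same compile-to-bytecode idea lives on in Python, whose interpreter is also a bytecode virtual machine. The standard library's dis module makes that intermediate form visible; a minimal sketch:

```python
import dis

def add(a, b):
    return a + b

# Print the stack-machine bytecode Python's compiler produces for
# add(). A Java .class file holds the same kind of intermediate
# instructions, which the JVM loads and translates for the host CPU.
dis.dis(add)
```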

Nearly a decade ago, Intel implemented virtualization in hardware, taking a page from high-end server platforms like HP's PA-RISC and IBM's POWER. It added buffering, switching and other services to the processor, essentially presenting a virtual copy of the processor itself to each operating system. The virtualization transparently time-sliced entire operating systems, making it possible for an ordinary personal computer to run several of them at the same time. Intel demonstrated this technology by setting up a system running a Windows instance and a Linux instance side by side; the Linux user was able to reboot without affecting the Windows user.
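On Linux, you can see whether a CPU exposes this hardware support by scanning its feature flags: Intel's VT-x shows up as vmx and AMD's equivalent as svm. A quick sketch, assuming a Linux machine with the usual /proc/cpuinfo layout:

```python
# Detect hardware virtualization support on Linux by checking the
# CPU feature flags: "vmx" marks Intel VT-x, "svm" marks AMD-V.
def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    print("Hardware virtualization:",
          "supported" if has_hw_virtualization() else "not found")
```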

It's important to understand the difference between virtualization and emulation. Emulation enables software written for one computing platform to run on another, for example running Windows x86 applications on a PowerPC-based Mac. Virtualization doesn't do this. It allows multiple complete operating system instances to run on the same computer, but each still has to be compiled for that platform, as is the case with OS X and Windows 7/8/10 today. Virtualization allows instances of both operating systems to run concurrently, each allocated computing resources as needed.

Virtual Supercomputing

Back in the day, when IBM was a powerhouse in supercomputing, it developed a technology called Scalable POWERparallel, or SP. This was virtual machine technology, but applied at a larger scale. The hardware side consisted of a set of rack-mounted POWER workstations with a proprietary switched interconnect organized in a hierarchy. A node could send a message to another node on the same rack with one network hop, to other nodes in its group with two network hops and to nodes in another group with three network hops.

This let the platform scale to pretty much as many processors as one was willing to pay for. And since no message ever needed more than three hops, data could move from one node to another quickly, with a fixed and known maximum latency for delivery from one process to another.
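A toy model of that hierarchy makes the latency guarantee concrete. The rack and group labels below are illustrative, not IBM's actual addressing scheme:

```python
# Toy model of the SP-style hierarchical interconnect: one hop
# between nodes on the same rack, two within a group, three across
# groups. The hop count, and hence worst-case latency, is bounded.
from collections import namedtuple

Node = namedtuple("Node", ["name", "rack", "group"])

def hops(a, b):
    if a.rack == b.rack:
        return 1
    if a.group == b.group:
        return 2
    return 3

n1 = Node("n1", rack=0, group=0)
n2 = Node("n2", rack=1, group=0)
n3 = Node("n3", rack=4, group=2)

print(hops(n1, n2))  # 2: same group, different racks
print(hops(n1, n3))  # 3: different groups
```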

When deploying an application, a developer could select a set of nodes via an admin console: pick 32 nodes, deploy the application and it would run as if it were on a 32-processor computing cluster. The SP2 could host several such applications at once, each one running as if it had an entire cluster to itself.
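In spirit, the admin console was doing something like the hypothetical partition manager below: carving exclusive node sets out of a shared pool so each application behaves as if it owned a private cluster. This is a sketch of the concept, not IBM's actual tooling:

```python
# Hypothetical SP2-style partitioning: each application receives an
# exclusive set of nodes from a shared pool and runs as though it
# had a dedicated cluster of that size.
class PartitionManager:
    def __init__(self, total_nodes):
        self.free = set(range(total_nodes))
        self.partitions = {}

    def allocate(self, app, count):
        if count > len(self.free):
            raise RuntimeError("not enough free nodes")
        nodes = {self.free.pop() for _ in range(count)}
        self.partitions[app] = nodes
        return nodes

    def release(self, app):
        self.free |= self.partitions.pop(app)

mgr = PartitionManager(128)
mgr.allocate("render", 32)  # runs as if on a 32-node cluster
mgr.allocate("sim", 64)     # coexists on the same hardware
```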

Shared Desktop Infrastructure

Nvidia's new twist on these two concepts is to implement them at the GPU level. Nvidia has added hardware support for the same sort of virtual machine that Intel demonstrated at IDF, along with partitioning similar to what IBM offered with Scalable POWERparallel.

The hardware side of GRID 2.0 consists of boards sporting two or four Nvidia GPUs that IT staff can deploy in a datacenter. The GPUs have hardware virtualization support, so each can be partitioned into several 'sub-GPUs' for applications to work with, and how much of the GPU an application gets is configurable. For standard users, such as software developers whose tasks involve writing, compiling and debugging applications, the administrator can allocate a small slice of the GPU, since those tasks don't generally use it heavily. At the same time, users doing more GPU-intensive work like CAD can get larger slices, and it's possible to allocate an entire GPU to a particularly compute-heavy task, offering a wide range of flexibility.
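Nvidia exposes these slices as named vGPU profiles; the names below follow the published M60-xQ pattern, but the allocator itself is a toy sketch, not Nvidia's management API:

```python
# Toy vGPU allocator: carve a physical GPU's framebuffer into
# slices of different sizes, mimicking how an administrator assigns
# vGPU profiles to users. Illustrative only.
PROFILES = {  # profile name -> framebuffer slice in GiB
    "M60-1Q": 1,  # small slice: compile/debug, office apps
    "M60-2Q": 2,  # medium slice: lighter CAD work
    "M60-4Q": 4,  # large slice: heavy CAD, grading
    "M60-8Q": 8,  # the whole GPU: compute-heavy tasks
}

class PhysicalGPU:
    def __init__(self, framebuffer_gib=8):
        self.free_gib = framebuffer_gib
        self.assignments = {}

    def assign(self, user, profile):
        size = PROFILES[profile]
        if size > self.free_gib:
            raise RuntimeError("GPU is fully partitioned")
        self.free_gib -= size
        self.assignments[user] = profile

gpu = PhysicalGPU()
gpu.assign("developer-1", "M60-1Q")  # small slice
gpu.assign("cad-artist", "M60-4Q")   # larger slice for GPU-heavy work
```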

Since GPUs contain a large number of processors geared toward computing tasks, Nvidia's GRID 2.0 enables extremely high compute density. Where IBM's SP2 required several racks to support a thousand or more processors, a 4-GPU GRID board now makes that possible with just a few PCI Express cards. On top of the higher compute density, interconnects are far faster now, so the processors can exchange messages and move data to and from main memory more quickly as well.

BYOD

As mobile computing grows, the 'Bring Your Own Device' situation is becoming more and more common. One benefit of a technology like GRID 2.0 is that it gives users access to their applications from their personal devices while keeping the data and the application on the corporate intranet. That makes securing corporate data easier for IT staff, but it has another major benefit: remote users can work with large data sets without spending the time to transfer vast amounts of data across the network.

With GRID 2.0, it's possible to run a GPU-compute application on a very high-end, shared GPU while interacting with it using pretty much any network connected device. The only data transferred over the intervening network is the rendered pixel data, or in other words, the user interface.
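Conceptually, the loop looks like the sketch below: the server updates the application from the user's input events, renders with the shared GPU and ships back nothing but compressed pixels. This is a toy illustration of the idea, not Nvidia's actual GRID streaming protocol:

```python
# Conceptual remote-display loop: input events flow in, compressed
# frames flow out; the application and its data never leave the
# server. A toy sketch, not Nvidia's GRID protocol.
import zlib

def apply_events(app_state, events):
    return app_state  # placeholder: a real app mutates state here

def render_frame(app_state):
    # Stand-in for the real GPU render; returns raw RGB bytes.
    return bytes(app_state) * (1920 * 1080)

def server_tick(app_state, events):
    app_state = apply_events(app_state, events)
    frame = render_frame(app_state)
    return zlib.compress(frame)  # only pixels cross the network

payload = server_tick(b"\x80\x40\x20", events=[])
print(f"frame payload: {len(payload)} bytes")
```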


How can this benefit film production?

With 4K capture becoming mainstream and camera companies pushing toward even higher-resolution formats, post-production software requires ever more computing power to provide an interactive experience for the user. High-end color grading suites already support remote operation, but those are proprietary solutions that require the user to run an instance of the software on their own device. With a technology such as GRID 2.0, a post house will be able to deploy seats of its post software of choice and any professional will be able to access that software as it runs on the post house's server. The post house manages the license and, even while the colorist is working, the software stays on the server, which also means there's no need to transfer the footage anywhere.

Is this really a step forward?

At first, GRID 2.0 sounds like a throwback to the days of the X-terminal, when a user would sit at a terminal with very little computing power connected to a mainframe or minicomputer. Back then, powerful computers were very expensive, so centralizing them was the only option. Working on an Xterm, a user had a virtual interface to a computer that was doing all of the real work. Xterms had relatively low fidelity, primarily displaying ASCII data, and the main way of interacting with the shared computer was a text-based interface to an operating system like UNIX or VMS, not particularly suitable for non-technical folks. X Windows operated by sending events between the user and the mainframe; the Xterm would interpret each event and update the user interface accordingly. The host computer didn't have any real influence over how the user interface appeared: if you were working on a monochrome Xterm, your UI was monochromatic.

Over the years, as personal computing became both affordable and powerful, personal workstations became standard. A post house would set up a network-attached storage system and a set of powerful workstations, one for each user. This approach requires each user to have their own computer and to be on site to use it. Each computer requires a software license and, when a new employee comes on board, the IT staff has to acquire and configure a whole new machine for them.

Being able to use a shared GPU system like GRID 2.0 eliminates the need to buy and configure a high-end computer with a suite of applications for each new employee, since the software runs on the post house's server. To improve performance, the IT staff need only upgrade that server rather than every workstation in the office. New employees can even bring their own computers, which don't necessarily need to run the same operating system, let alone the application.

Generally, when I start working at a new company, I first have to get my new computer imaged with a standardized setup of the operating system, then spend the time to download, install and configure the applications required to do my job. It's taken anywhere from a day to a week, and it's time that's largely unproductive. If the shop is set up with a properly virtualized infrastructure, getting a new employee started on the systems and software becomes quite a bit simpler. The new employee doesn't need to install the software; they just log in to a new account and launch the software that's already installed and licensed on the server. This is even more advantageous for companies that hire freelancers, since it lets a freelancer come on board and get started nearly immediately, rather than spending time installing software and copying footage and other assets.

Several color grading applications support remote grading, where a colorist can launch an instance of the color suite on their own workstation and remotely control an instance running on their client's server, so that they can work on the color grade without sending hard drives around or transferring huge amounts of footage over the internet. This is particularly useful since the push for 4K and even higher resolutions is leading to ever larger data rates and the internet is falling farther and farther behind the bandwidth demand.

This approach to sharing computing power is what enables services like Amazon Web Services and Microsoft's Azure to offer virtual servers at low prices: the costs of acquiring the hardware, installing and maintaining the network, power and air conditioning, and administering the machines are shared across a large array of customers. AWS and Azure are built on standard processors and, as such, are limited to server-side applications, typically the web services that provide back-end functionality for web and mobile apps.

Nvidia is working to extend this same shared computing model to heavily graphical as well as compute-intensive applications. While its white paper refers to 3D and CAD-oriented software, it seems reasonable to expect it to work similarly with color grading and compositing software, possibly even enabling editors to work directly with original footage rather than offline proxies, even on devices like iPads and Surface Pros.

Full Circle

While Nvidia's GRID 2.0 is, in fact, reviving older computing models from the mainframe and Xterm days, it does offer functionality beyond what Scalable POWERparallel and other shared-memory supercomputers offered. One notable improvement is that the programming models for GPUs are based on newer, more sophisticated languages and compilers, so it's easier for developers to write applications that share data between many running threads. GRID installations also require less space, less power to operate and cool, and fewer network resources, making them easier for small companies to deploy.

It's an evolutionary rather than revolutionary step forward, but a big step nonetheless.

For another take on a possible move back to 'terminal'-style workflows, check out our Technical Editor's exploration of the topic: 'Why are we still using workstations?'
