
After Effects and the state of GPU computing

Written by Phil Rhodes | Jan 17, 2013 10:00:00 AM
The State of GPU Computing

What exactly is GPU computing and what does and doesn't use it?

As we saw back in December in Eyeon's informative video, GPU computing is a very powerful technique that, during the last year or two, has begun to break out of a niche. The original application of graphics cards was, obviously, video games. 3D rendering packages such as Max and Lightwave have been using games-oriented graphics hardware to produce approximate previews of the scene for some time. More recently, the world's most popular operating system learned how to draw its user interface using more of the graphics card's features, saving the CPU from spending its valuable time working out which window is on top.

What's new

What's new is the application of graphics processing units to calculations which are not, at least directly, graphics-related. Projects such as Folding@Home have used GPUs for simulation in medical research, and in the last couple of versions, some postproduction software has begun to apply the same technology to rendering effects. Even video games have followed the curve, and now commonly do physics simulation for rigid objects as well as soft bodies, smoke, and liquids.
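
By way of illustration, here's a minimal sketch of my own (not code from Folding@Home or any shipping game engine) of the shape such non-graphics work takes on a GPU, written in CUDA C++: one thread advances one simulated particle, so a single launch steps the whole crowd of them in parallel. The kernel and buffer names are purely illustrative.

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// One thread per particle: a single launch advances the whole simulation
// step in parallel, and nothing about the arithmetic is "graphics" work.
__global__ void stepParticles(float *pos, const float *vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pos[i] += vel[i] * dt;   // simple constant-velocity update
}

int main()
{
    const int n = 100000;
    const size_t bytes = n * sizeof(float);

    // Host-side buffers, filled with trivial starting values
    float *hPos = (float *)malloc(bytes);
    float *hVel = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hPos[i] = 0.0f; hVel[i] = 1.0f; }

    // Device-side copies
    float *dPos, *dVel;
    cudaMalloc(&dPos, bytes);
    cudaMalloc(&dVel, bytes);
    cudaMemcpy(dPos, hPos, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dVel, hVel, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block, enough blocks to cover every particle
    stepParticles<<<(n + 255) / 256, 256>>>(dPos, dVel, n, 0.016f);

    cudaMemcpy(hPos, dPos, bytes, cudaMemcpyDeviceToHost);
    printf("particle 0 is now at %f\n", hPos[0]);

    cudaFree(dPos); cudaFree(dVel);
    free(hPos); free(hVel);
    return 0;
}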

While this is all good, it could be better.

To understand why, it's probably worth recapping how modern GPUs work and what they're therefore capable of doing.

Doing a large number of things at once

The fundamental principle at work is parallel computing, the concept of doing a large number of things at once. Computing has traditionally been focussed on doing one (or sometimes four or eight) complicated tasks at once, as quickly as possible. GPUs take the opposite approach, simplifying each processing core to the point where a single graphics card can have literally hundreds of them, and even though a GPU isn't generally clocked as quickly as a CPU, the performance advantage can be huge. The limitation is that all of these processing cores generally have to perform the same operation across a large set of data, which frequently means chunks of an image, although as we've seen with Folding, other things can be done too. Tasks which lend themselves to this sort of repetitive, parallel approach are extremely common in image processing and compositing and, notwithstanding ARM's apparent dedication to putting more cores in their CPUs, it's becoming increasingly obvious when a particular application is making a CPU do work that's better done on a more parallel architecture.
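
To make that concrete with image data (again, a sketch of my own rather than how After Effects or any particular package actually does it), a simple gain adjustment can be written so that each GPU thread handles exactly one sample of the frame, and the card runs hundreds of those threads at once. The loop over pixels simply disappears and becomes a grid of threads.

#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Each thread scales exactly one sample of the frame; launching enough
// threads to cover every sample replaces the per-pixel loop.
__global__ void applyGain(float *frame, int samples, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < samples)              // guard the final, partially-filled block
        frame[i] *= gain;
}

int main()
{
    const int samples = 1920 * 1080 * 3;        // one RGB HD frame
    const size_t bytes = samples * sizeof(float);

    // Host-side frame, filled with mid-grey for the sake of the example
    float *hFrame = (float *)malloc(bytes);
    for (int i = 0; i < samples; ++i) hFrame[i] = 0.5f;

    // Copy the frame to the graphics card
    float *dFrame;
    cudaMalloc(&dFrame, bytes);
    cudaMemcpy(dFrame, hFrame, bytes, cudaMemcpyHostToDevice);

    // 256 threads per block; enough blocks to cover every sample
    applyGain<<<(samples + 255) / 256, 256>>>(dFrame, samples, 1.2f);

    // Read the result back and check one sample
    cudaMemcpy(hFrame, dFrame, bytes, cudaMemcpyDeviceToHost);
    printf("first sample after gain: %f\n", hFrame[0]);

    cudaFree(dFrame);
    free(hFrame);
    return 0;
}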


After Effects

The source of my own dissatisfaction in this regard is Adobe After Effects. There is, for many tasks, no real replacement for AE, and I have a regard for it born of long familiarity. The problem is that AE is, perhaps necessarily and perhaps unnecessarily, slow. You can't throw enough hardware at AE. It eats memory for breakfast and nibbles on eight-core Xeons as if they were silicon-flavoured after-eight mints, at least if you're doing anything even slightly complicated. And After Effects, unlike its sister application Premiere, unlike Lightworks with its user-writable GPU-based filters, and unlike the very desktop environments in which it runs, does not do very much by way of GPU computing.

Now, I don't want to offer unqualified criticism of After Effects. The recent CS6 version does include a GPU-oriented raytracer whose performance is enormously improved over previous incarnations of similar things. It is a very modular and configurable application which invites almost limitless creativity, but it's probably that modularity that makes it slow: data must go through a long pipeline in many compositions, and as far as I know all of that routing work is done on the CPU, even if a particular plugin wishes to reference the graphics hardware. Even AE's biggest fans would probably admit that it is, to use the polite terminology of the field, not the world's most performant application.

Computational efficiency

Ultimately, no matter what the software, poor performance restricts creativity. I know from my own experience, and I've had it confirmed by colleagues, that more computer performance does not often allow AE operators to go home early. It simply extends the amount of finesse you can apply until the performance of the software becomes too poor to continue. If GPU computing could make the same difference to AE that it made to Premiere, I would like to add my voice to the cacophony of people requesting it. I don't want to single out AE unfairly here; a lot of 3D applications still do all their rendering on the CPU, with recent attempts at GPU integration focussed on making renders better as opposed to faster.

I've often expressed frustration over the fact that the saleability of software is predicated heavily on feature set and spec list, with performance a secondary priority, but this feeling is stronger than ever when it comes to GPU computing. Hopefully new features will continue to be written to take advantage of it, but I can't help feeling – perhaps a little hopefully – that this particular wave of change has yet to break.