
The History of VFX Part IV: Concerning hobbits and other creatures


 

History of VFX Part 4: Contact. Image: Warner Brothers/RedShark

 

By the late 1990s, digital VFX had very much come into their own, promising increasingly spectacular visuals in increasingly spectacular films. Andy Stout takes us up to the present day via Pandora, Middle Earth, the middle of the Atlantic and other locations.

Part 1 is here

Contact (1997) is an oft-overlooked film, one of the most intelligent sci-fi pictures of recent times, and also notable for the first appearance of a certain Weta Digital on the world stage. In fact, it was one of the first major films to pioneer the distributed VFX workflow that has since become standard practice throughout the industry, dividing the project between eight separate companies. Up until then, VFX had tended to be undertaken wholly by a single house, but the increasing size of the projects on the one hand, and on the other the ability of digital colour correction to marry up any colour slippage between the footage supplied by the different teams, made splitting the work the best way to proceed.

Meanwhile, in the Atlantic

In fact, the VFX Oscar winner of that year, James Cameron’s Titanic (1997), has 22 companies listed under special effects in its IMDb entry. The film itself was a bravura display of what could be achieved by melding models and digital effects, one sweeping shot taking in the entire 14m-long model of the RMS Titanic, populated by a cast of mocap-driven digital extras with digital water and smoke added in afterwards.

“I watched it because I knew, logically, that this shot was a special effect,” the late film critic Roger Ebert wrote at the time. “They did not rebuild the Titanic to make the movie. I knew, in general, what to look for - what trickery might be involved - and yet I was fooled. The shot looks like the real thing.”

There were plenty of challenges along the way. Motion capture was still a relatively new technology, certainly for the film world, and while the faster processing speeds of 1997’s computers had dealt with many of the more severe problems of occluded markers, the data still needed cleaning up before being applied to the digital extras in the film. The VFX team also found that the action needed proper directing – aimless walk cycles looked exactly like aimless walk cycles once they were comped into the finished footage – so a whole host of mini-vignettes requiring multiple takes had to be shot.

Then there was the digital water (only a 20m model of the stern, designed to break in two, was ever used in any actual real wet stuff). Digital Domain tried writing its own software internally but got bogged down in the sheer number of challenges, in the end calling in a US Department of Defense contractor on a temporary contract to repurpose its ocean simulation and rendering software. Its mix of an accurate lighting model and physically accurate but chaotic wave simulation created an ocean that worked both in close-up and in wide shots.
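The contractor’s actual software has never been published, but the underlying idea of building a sea surface from many physically derived wave components can be sketched in a few lines. Everything below – the wave count, amplitudes and wavelength range – is an illustrative assumption rather than anything used on Titanic; only the deep-water dispersion relation tying wave speed to wavelength is real physics.

```python
import numpy as np

def ocean_height(x, y, t, num_waves=64, seed=1, gravity=9.81):
    """Toy deep-water ocean: sum many sinusoidal components whose speed
    follows the deep-water dispersion relation (omega^2 = g * k).
    Amplitudes and directions are random illustrative choices, not a
    calibrated spectrum."""
    rng = np.random.default_rng(seed)
    height = np.zeros_like(x, dtype=float)
    for _ in range(num_waves):
        wavelength = rng.uniform(2.0, 60.0)           # metres
        k = 2 * np.pi / wavelength                    # wavenumber
        omega = np.sqrt(gravity * k)                  # dispersion relation
        direction = rng.uniform(0, 2 * np.pi)
        dx, dy = np.cos(direction), np.sin(direction)
        amplitude = 0.02 * wavelength                 # longer waves ride taller
        phase = rng.uniform(0, 2 * np.pi)
        height += amplitude * np.sin(k * (dx * x + dy * y) - omega * t + phase)
    return height

# Sample a 256 x 256 patch of sea surface at t = 0 seconds
xs, ys = np.meshgrid(np.linspace(0, 200, 256), np.linspace(0, 200, 256))
surface = ocean_height(xs, ys, t=0.0)
```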

Changing the language

Ebert’s assertion that an effect looked like the real thing was challenged significantly in The Matrix (1999), but deliberately so. VFX encompassed nigh on a fifth of the runtime of the Wachowski Brothers’ dystopian action classic, and it was used far more overtly than ever before, becoming part of the film-maker’s language and at times even working as a plot device for the on-screen action.

Its chief innovation was bullet time, a video adaptation of the time-slice technique originally pioneered in still photography by Tim Macmillan. Essentially, a near-circular rig is set up around the object or scene to be filmed, a battery of still cameras is mounted on it, and all of them are fired at once. Convert that to film and you have an image frozen in time that you can spool backwards and forwards through. The video version simply adds a fraction of a second’s delay between each camera’s exposure to produce slow-motion footage, while the sequence is also post-produced to remove the visible rig from the other side of the shot.
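The arithmetic of the trick is simple enough to sketch. The camera count, inter-camera delay and playback rate below are illustrative guesses, not the figures used on The Matrix, but they show how a tiny stagger between stills stretches a fraction of a second of live action into several seconds of screen time.

```python
def bullet_time_triggers(num_cameras=120, inter_camera_delay=1.0 / 1000):
    """Firing time of each still camera along the rig.
    A delay of zero gives a frozen moment you can 'spool' through;
    a small per-camera delay turns the sweep into extreme slow motion."""
    return [i * inter_camera_delay for i in range(num_cameras)]

triggers = bullet_time_triggers()
real_time_covered = triggers[-1]          # seconds of live action captured
playback_duration = len(triggers) / 24    # seconds on screen at 24 fps
print(f"{real_time_covered:.3f}s of action stretched over {playback_duration:.1f}s on screen")
```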

By the time we reach Star Wars: Episode I – The Phantom Menace (1999), George Lucas is cramming a shade under 2000 effects shots into his movie and is splitting the work not so much between studios as between VFX supervisors, employing three on the film. In a complete reversal of the order of things up till then, somewhere in the region of 90% of the film contains VFX shots, with only around 12 minutes left untouched. So that’s 12 minutes guaranteed free of Jar Jar Binks then...

(As an aside to all this concentration on VFX, but highly significant too, the Coen Brothers' excellent O Brother, Where Art Thou? was released in 2000 and was the first film to go through the digital intermediate process and be fully colour corrected throughout.)

Massive attack

While The Phantom Menace is notable mainly for the sheer scale of its achievements, Peter Jackson’s epic Lord of the Rings trilogy is pretty much notable for scale, achievement, and any other superlative you want to throw at it. Each instalment won the VFX Oscar for that year, and while you could write whole books on the effort and innovation that Weta Digital applied to each of the films (and, indeed, people have) you can probably break the true landmarks down into three things: Massive, Gollum and forced perspective.

The latter is an optical camera trick that dates all the way back to the 1920s and sees characters who are supposedly standing next to each other actually displaced by several feet, so that one reads as much smaller than the other. Traditionally the camera has to be locked off for the trick to hold, but Jackson’s team constructed partially moving sets that were slaved precisely to the camera’s movements to maintain the illusion even on a moving shot. All this effort was aided and abetted by other techniques varying from the low-tech (standing on boxes, kneeling down) to the high-tech (two different-scale Bag Ends comped together).
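The geometry behind the trick is simply that apparent size falls off roughly in proportion to distance from the lens. The quick sketch below works out where to stand an actor so they read at a given fraction of full height; the distances and the 60% figure are purely illustrative, not measurements from Jackson’s sets.

```python
def forced_perspective_distance(reference_distance_m, apparent_scale):
    """Apparent size on film falls off roughly as 1 / distance from the lens,
    so to make an actor read at, say, 60% of their true height they must stand
    1 / 0.6 times farther away than the full-size character."""
    return reference_distance_m / apparent_scale

# Gandalf stands 2 m from the lens; how far back must a 'hobbit' actor be
# to appear at 60% scale? (Illustrative numbers only.)
hobbit_distance = forced_perspective_distance(2.0, 0.6)   # ~3.33 m
print(f"Place the hobbit actor about {hobbit_distance:.2f} m from the camera")
```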

With Massive, Weta finally got its hands on the cast of thousands that could lend credence to the battle scenes. Essentially it’s a programmable mob: input motion elements, set up a branching decision tree for each agent, set them loose, render them and you’re done. Coupled with a renderer that could handle scenes involving hundreds of thousands of agents and millions of polygons, it gave the armies their scale. Coupled in turn with some fine miniature work, it gave the battles their visceral edge, especially once the fine-tuning was complete and the orcs could be stopped from either attacking their fellows en masse or running away completely.
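Massive’s real agent "brains" are built on fuzzy logic and are far more sophisticated than anything that fits here, but the branching-decision idea can be illustrated with a toy agent. The attributes, thresholds and motion-clip names below are invented for the example, not anything from Weta’s pipeline.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    """One crowd extra. Attributes and thresholds are illustrative only."""
    side: str
    courage: float          # 0..1, how reluctant the agent is to flee
    enemy_distance: float   # metres to the nearest opposing agent

    def decide(self):
        # A tiny branching decision tree choosing which motion clip to play
        if self.enemy_distance > 20.0:
            return "advance_walk_cycle"
        if self.courage > 0.5:
            return "attack_swing" if self.enemy_distance < 2.0 else "charge_run_cycle"
        return "flee_run_cycle"

# Set a small mob loose and see which clips each agent picks
army = [Agent("orc", random.random(), random.uniform(0.0, 40.0)) for _ in range(10)]
for agent in army:
    print(agent.side, agent.decide())
```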

And then there’s Gollum. All the elements of the films evolved as production stretched from months into years, but Gollum probably evolved the most. Weta wrote a lot of code for the creature, much of it simulating muscle and skin, and he changed from a NURBS model at the start to a subdivision-surface one by the end. Jackson also eventually changed the way he was filmed. In the first two instalments, actor Andy Serkis’ movements were captured separately on a motion capture stage and the character composited in (with keyframe animation applied on top). However, the performances of the entire cast were so much stronger with him in the scene that, by the time of The Return of the King (2003), the mocap lights were being set up on set and the cast directed together, with Serkis simply wearing his mocap suit and being painted out in post.

Pandora's Box

Six years later, motion capture had become performance capture and was at the heart of what James Cameron and a team once again led by Weta achieved with Avatar (2009). On a frankly enormous capture stage, the actors playing the Na’vi were shot with an additional miniature camera on a helmet-mounted boom capturing close-up data of their facial expressions for added realism. With the majority of the Pandora-set scenes filmed against blue-screen, low-res proxies of the backgrounds were also generated so that actors and crew alike could get realtime feedback on what the finished shot might look like.

ILM was called in late in the day to help Weta out (as were a few other companies), which led to a few interesting sequences where both companies were working on different elements and had to coordinate efforts across the Pacific with less notice or planning than either was used to (the rule seems to have been that whoever finished their bit first got to dictate the look). Beyond the 3D, the big innovation in Avatar was probably the wholly CG explosions that ILM developed. Essentially a further evolution of the fluid dynamics engine it had written for films such as Pirates of the Caribbean, it worked on the principle that gases and liquids behave in roughly analogous ways, in that both media are relatively incompressible.
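ILM’s solver is proprietary, but the shared constraint referred to here – keeping the simulated medium (near-)incompressible, i.e. divergence-free – is at the core of most grid-based fluid and gas solvers. Below is a minimal, purely illustrative 2D pressure-projection step in that spirit; the grid size, boundary handling and iteration count are arbitrary assumptions, not ILM’s method.

```python
import numpy as np

def project(u, v, iterations=60, h=1.0):
    """Make a 2D velocity field (u, v) approximately divergence-free,
    the incompressibility constraint shared by liquid and gas solvers.
    Illustrative only: fixed boundaries, simple Jacobi pressure solve."""
    div = np.zeros_like(u)
    p = np.zeros_like(u)
    # divergence of the velocity field (central differences, interior cells)
    div[1:-1, 1:-1] = ((u[1:-1, 2:] - u[1:-1, :-2]) +
                       (v[2:, 1:-1] - v[:-2, 1:-1])) / (2 * h)
    # Jacobi iterations on the pressure Poisson equation: laplacian(p) = div
    for _ in range(iterations):
        p[1:-1, 1:-1] = (p[1:-1, 2:] + p[1:-1, :-2] +
                         p[2:, 1:-1] + p[:-2, 1:-1] - h * h * div[1:-1, 1:-1]) / 4
    # subtract the pressure gradient to remove the divergent component
    u[1:-1, 1:-1] -= (p[1:-1, 2:] - p[1:-1, :-2]) / (2 * h)
    v[1:-1, 1:-1] -= (p[2:, 1:-1] - p[:-2, 1:-1]) / (2 * h)
    return u, v

# Quick check on a random field: divergence shrinks markedly after projection
u, v = np.random.randn(64, 64), np.random.randn(64, 64)
u, v = project(u, v)
```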

Up till then, explosions in the VFX industry had still started life as practical elements, albeit latterly often with CG elements added in afterwards. And so finally, after many decades, on the planet Pandora one of the last bastions of the in-camera effects shot was breached...

The state of the art?

We finish with The Hobbit: An Unexpected Journey (2012), which demonstrates both what can be done with VFX and, equally, what shouldn't be done. On the good side you have a lot of fine work, as Weta had fine-tuned its processes over the intervening decade. Serkis was on set throughout as Gollum, being motion-captured at 48fps no less, and there are some truly astonishing sequences and character animations (which will reach their apogee in the next instalment when Benedict Cumberbatch’s movements are mapped onto the dragon Smaug’s body). Then there were some very clever workarounds to get over the fact that forced perspective doesn’t work with stereo 3D. In one Bag End sequence, for instance, the dwarves were shot on the Bag End stage, where everything was scaled by 30%, while Gandalf was on a greenscreen stage. Both sets of actors wore earpieces so they could hear direction, and visual props that would later be painted out were dotted about so that they knew where to look. Jackson then simply directed it in four takes using slaved cameras and a realtime low-res composite (though reportedly it took nearly a year to put together in post).

On the bad side, though, you have the inexplicable, unnecessary and painfully bad sequence of Radagast the wizard driving his rabbit-pulled sled across the landscape to distract the Wargs from attacking the questing party. It looks terrible, goes on far too long, and you wonder what on earth the makers were thinking letting it out into the wild when it’s hardly integral to the plot.

And that, appropriately enough, is where we finish the linear history of VFX in cinema, with the knowledge that just because something can be done doesn’t always mean that it should be. In the next two parts we will look more closely at the optical and digital techniques – the cameras, tricks, lenses and software – that the industry has developed over the years.

Read part 3

Read part 5
