06 Sep 2014

Depth sensing cameras could revolutionise CGI

  • Written by Simon Wyndham
The Kinect controller - based on 3D camera technology. Image: Microsoft


The Z List: One of the great technological leaps forward that failed to really catch on the first time round was the 3D camera - a unit that could automatically capture the z-axis (the "depth" axis) of a scene to create depth map information. Perhaps it's time for the industry to look at these cameras again.

If there is one aspect of films that makes huge leaps year on year, it is computer generated imagery. Ever since James Cameron's “The Abyss” and, subsequently, “Terminator 2: Judgement Day” exploded onto the scene, seamlessly integrating CGI with live action footage, we have expected our CGI to come with a firm dose of photo-realism.

With ever more powerful computers available to, well, pretty much everyone, this CGI revolution gradually made its way down the chain until it was accessible to all of us. Anyone with After Effects and a suitable 3D animation package could try their hand at the sorts of special effects that were once the preserve of million-dollar movies.

There is one link that is missing in all of this, however. From the most expensive films down to the most lowly of independent movies, nothing has really come along to make compositing (the controlled blending together of multiple layers) truly easy. I mean truly point-and-click easy.

It is true that software has become better at detecting outlines and gives us more tools to extract objects from the background, and moviemakers have become more adept at shooting well-lit green screen. What happens, though, if you are shooting against a real background and you wish to integrate CGI or other effects? What then?

Traditionally this means long nights of hand-tracing outlines and other archaic methods. For the animator it is about as much fun as having root canal surgery. Despite all the promises of amazing edge detection in the latest versions of After Effects, for example, the software isn't perfect, and it would be impossible for it to be so. After all, the world is a complex place and a computer cannot possibly have the intelligence to know what an individual object in a scene actually is. At least not yet.

So what can be done about this? The answer, I believe, lies inside your Xbox Kinect device. The Kinect is not just a camera. The secret to how it works is that it creates a Z-depth map of the scene in front of it.

Z-depth maps have long been used by 3D animators for compositing purposes and effects. They can be used by apps such as Photoshop to create depth of field effects in post for pre-rendered 3D animations, for example. This works because a Z-depth map is a greyscale render of a scene in which the tone of the shading depicts the distance of each surface from the camera. The furthest away point is rendered as black, while surfaces that are closer to the camera are lighter in shade, going towards white.
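To make that mapping concrete, here is a minimal sketch of how per-pixel camera distances could be turned into exactly this sort of greyscale Z-depth map. It is written in Python with NumPy purely for illustration; the near/far clipping values and the depth_to_zmap name are my own assumptions rather than anything a particular camera or renderer dictates.

    import numpy as np

    def depth_to_zmap(depth_m, near=0.5, far=10.0):
        # Convert per-pixel distances from the camera (in metres) into an
        # 8-bit Z-depth map using the convention described above:
        # the furthest surfaces come out black (0), the nearest white (255).
        d = np.clip(depth_m, near, far)
        t = (d - near) / (far - near)      # 0.0 = near, 1.0 = far
        return ((1.0 - t) * 255).astype(np.uint8)

    # Hypothetical usage: depth_m would normally come from a renderer's Z pass
    # or an RGB-D camera's depth stream as a (height, width) float array.
    depth_m = np.random.uniform(0.5, 10.0, size=(480, 640))
    zmap = depth_to_zmap(depth_m)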

Such depth maps are now also used to assist in creating 3D conversions from 2D sources.

So what on earth has this got to do with the cameras that we all use? As the Xbox Kinect and other similar devices show, it is possible to combine depth mapping with a standard camera. Such devices go by many names, including range cameras and RGB-D cameras, to name a couple.

If we can capture an accurate depth channel at the same time that we shoot our footage, we will immediately have an extremely powerful tool for post-production work.
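As a rough illustration of why such a depth channel would be so powerful, here is a sketch of two things it makes almost trivial: compositing a CGI layer against live footage on a per-pixel "whoever is nearer the camera wins" basis, and pulling a crude matte by simply thresholding distance rather than hand-tracing outlines. Again this is Python with NumPy by way of example only; the function names and the two-metre cutoff are assumptions for illustration, not an established workflow.

    import numpy as np

    def depth_composite(live_rgb, live_depth, cgi_rgb, cgi_depth):
        # live_rgb / cgi_rgb: (H, W, 3) colour arrays in the 0..1 range
        # live_depth / cgi_depth: (H, W) distances from the camera in metres
        # Wherever the CGI element is nearer the camera than the live plate,
        # show the CGI pixel; otherwise keep the live-action pixel.
        cgi_in_front = cgi_depth < live_depth
        return np.where(cgi_in_front[..., None], cgi_rgb, live_rgb)

    def depth_matte(live_depth, cutoff_m=2.0):
        # A crude "depth key": everything nearer than cutoff_m becomes
        # foreground (1.0), everything further away becomes background (0.0).
        return (live_depth < cutoff_m).astype(np.float32)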





Simon Wyndham

Simon Wyndham is the Editor of RedShark News, a professional cameraman and video producer of 20-odd years. With a background in indie feature making, he has been writing camera reviews and tech articles for as long as he can remember. When he isn't producing bread-and-butter corporate videos he can be found hucking the gnar on rivers, whitewater kayaking and filming adventure sports.

Website: www.5ep.co.uk
