Pixels are dead. Long live Voxels.

Written by David Shapton

Image: 3D Mesh Reality. Credit: Carnegie Mellon University

If you’re getting bored with the daily diet of 4K, 48FPS and stereoscopic announcements (and I hasten to add that I’m not!), then here’s something that might raise at least one eyebrow by a degree or two.

It’s Volumetric Video.


Cubic Pixels

Some researchers have taken a Microsoft Kinect (the thing you plug into your Xbox 360 that allows you to dispense with the controller in video games, because you replace it with your own body) and upgraded the resolution of its built-in camera with a pair of DSLRs, with some rather hallucinogenic results.

The Kinect tracks your body’s position in three dimensions, and it does this through a multi-stage process, part of which is capturing your posture complete with depth information. Unlike traditional cameras that record an image in two dimensions (x and y), the Kinect records a third dimension, depth, normally called z.

So every pixel captured by the Kinect has a horizontal, vertical and depth position. It exists within a volume, and not just on a plane.
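
As a concrete illustration, here’s a minimal sketch in Python (with NumPy) of how a depth frame back-projects into a cloud of x, y, z points via the standard pinhole camera model. The intrinsic values (FX, FY, CX, CY) are illustrative assumptions, not the Kinect’s actual calibration.

```python
import numpy as np

# Illustrative camera intrinsics -- assumptions, not the Kinect's real calibration
FX, FY = 525.0, 525.0   # focal lengths, in pixels
CX, CY = 319.5, 239.5   # principal point (image centre)

def depth_to_point_cloud(depth):
    """Back-project a depth image (in metres) into an (N, 3) point cloud.

    Each pixel (u, v) with depth z becomes a 3D point using the pinhole
    model: x = (u - cx) * z / fx,  y = (v - cy) * z / fy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels with no depth reading

# A stand-in 480x640 depth frame with everything two metres away
cloud = depth_to_point_cloud(np.full((480, 640), 2.0))
print(cloud.shape)   # (307200, 3): one xyz sample per valid pixel
```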

This is extremely important if you are going to use your body as a games controller, because it allows Kinect-savvy games to understand your position in three dimensions, but in a different way to stereography. The Kinect creates an actual 3D model of your body in real time. Once it’s captured, you can “fly” around it as if it were a moving statue. (Of course, the Kinect can’t see round the back, so the model may have a 3D “shadow” where there are no details at all.)
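
To picture the “flying around” part: once the capture is just an array of xyz points, viewing it from a new angle is simply a rotation of the data. The orbit_view helper below is a hypothetical sketch, not part of any Kinect SDK.

```python
import numpy as np

def orbit_view(points, angle_deg):
    """Rotate an (N, 3) point cloud about the vertical (y) axis,
    simulating a camera orbiting the frozen 'statue' of the subject.

    Points the sensor never saw (the 3D "shadow" behind the subject)
    were never captured, so they stay empty from every angle.
    """
    a = np.radians(angle_deg)
    rot_y = np.array([[ np.cos(a), 0.0, np.sin(a)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(a), 0.0, np.cos(a)]])
    return points @ rot_y.T

# View a (stand-in) capture from 45 degrees around to the side
cloud = np.random.rand(1000, 3)
side_view = orbit_view(cloud, 45.0)
```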


Getting behind the subject

There have been examples of this on the web for some time now, but the results, while fascinating, have been decidedly lo-fi because of the Kinect’s low-resolution imaging. Now, with the RGBD Toolkit, developed by the Studio for Creative Inquiry (part of Carnegie Mellon University), you can capture volumetric video at much higher resolution.


As the toolkit’s demo footage shows, there’s still a long way to go. But this is something of seismic importance to the film and TV industry.

Because when this technique is refined (and there’s almost no limit to how refined it can get, when you remember this is based on consumer games technology and not industrial-grade studio equipment), we will be able to film any scene and then change the position of the camera in post production.

Camera position will become just another keyframeable attribute that you can change in the edit. Even the volumetric “shadow” behind the objects in the frame needn’t be a problem: just put another camera round the back and coordinate it with the ones at the front.
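
In scripting terms, a keyframed virtual camera over volumetric footage might look like the sketch below. interpolate_camera is a hypothetical helper, and simple linear interpolation stands in for whatever easing curves a real edit system would offer.

```python
import numpy as np

def interpolate_camera(keyframes, t):
    """Return the virtual camera's xyz position at time t (seconds).

    keyframes: a time-sorted list of (time, [x, y, z]) pairs. Because
    the scene itself is volumetric data, the render camera is just
    another keyframeable attribute, changed freely in the edit.
    """
    times = [k[0] for k in keyframes]
    positions = np.array([k[1] for k in keyframes])
    return np.array([np.interp(t, times, positions[:, axis])
                     for axis in range(3)])

# Example: dolly from in front of the subject around to its side
keys = [(0.0, [0.0, 1.5, -3.0]),
        (2.0, [2.5, 1.5, -1.0]),
        (4.0, [3.0, 1.5,  1.5])]
print(interpolate_camera(keys, 1.0))   # position one second into the move
```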

Flight of fantasy

Of course, once we exist as real-time high-resolution models, we can dispense with conventional motion capture and apply CGI “textures” to our bodies. We’ll be able to change our clothes, grow hair, and even look like long-dead actors or the native inhabitants of Pandora.

And why limit ourselves to three dimensions? If you embed a time coordinate in each voxel (which would become, I suppose, a “tixel”), then there’s no need even to think about the order in which events happen, because the tixels themselves would just appear, at the right time and in the right place.
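
The “tixel” is pure speculation, but one way to picture it is as a sample that carries its own moment as well as its own place, so a capture becomes an unordered bag of samples rather than an ordered sequence of frames. The structure below is a hypothetical sketch, not the output of any existing camera.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tixel:
    """A hypothetical 'tixel': a voxel that carries its own timestamp."""
    x: float    # horizontal position (metres)
    y: float    # vertical position (metres)
    z: float    # depth (metres)
    t: float    # time coordinate (seconds)
    rgb: tuple  # colour sample, e.g. (r, g, b)

def slice_at(tixels, t, tolerance=1 / 96):
    """Reconstruct one instant: keep the tixels whose timestamps fall
    within half a 48FPS frame interval of the requested time. Order of
    arrival is irrelevant; each tixel already knows when and where it
    belongs."""
    return [tx for tx in tixels if abs(tx.t - t) <= tolerance]
```

A renderer would then simply ask for the instant it wants to see, rather than stepping through frames in order.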
