
This is the most amazingly realistic realtime CG human we've yet seen


Replay: This is some of the most remarkable 12 minutes and 34 seconds you will watch for quite a while, as Doug Roble, Head of Software R&D at VFX powerhouse Digital Domain, shows off how to drive a digital human in absolute realtime and with a surprising amount of nuance.

From the realistic to the goofy, the two Dougs are in almost perfect sync. Image: TED/Digital Domain

Okay, maybe it won’t be remarkable to some people reading this; realtime computing miracles happen all the time now. But to anyone who remembers the early days of optical motion capture, with its painstaking hours of intricately choreographed capture followed by intensive and mind-numbing editing to account for all the occluded data points where Limb A had moved in front of Ball B, it’s a bit of a revelation.

As Roble explains, though, for all the advances since those early days, creating lifelike digital actors is still a difficult and compute-intensive task involving thousands of hours and hundreds of talented artists. This TED talk doesn’t just move the goalposts; it’s playing an entirely different game.

Eighteen months of work later, and you have DigiDoug (yes, that’s the name they came up with). And, to be honest, it’s difficult to see the join. Real Doug drives DigiDoug in realtime using a motion capture body suit and a minimally rigged camera pointed at his face. It may look a bit ungainly, but the results are impressive, with CG DigiDoug’s smallest nuance and gesture all but indistinguishable from the real thing.

The project started off at the University of Southern California’s Light Stage, where a serious amount of work was put into capturing Roble’s face in exquisite detail.

“Once we had this enormous amount of data, we then built and trained deep neural networks,” he explains. “And when we were finished with that, in 16 milliseconds, the neural network can look at my image and figure out everything about my face. It can compute my expression, my wrinkles, my blood flow — even how my eyelashes move. This is then rendered and displayed up there with all the detail that we captured previously.”
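Roble doesn’t go into implementation detail, but the loop he describes, a captured face image fed through a trained deep network that regresses expression, wrinkle, blood-flow and eyelash parameters inside a roughly 16 millisecond frame budget before being handed to the renderer, would look broadly like the sketch below. This is a minimal illustrative sketch in Python, not Digital Domain’s code; the function names, parameter set and renderer hook are all assumptions.

```python
import time
import numpy as np

FRAME_BUDGET_S = 0.016  # ~16 ms per frame, the figure Roble quotes in the talk


def infer_face_parameters(face_image: np.ndarray) -> dict:
    """Stand-in for the trained deep neural network that maps a face image
    to animation parameters (expression, wrinkles, blood flow, eyelashes).
    A real system would run a trained model here; we return dummy values."""
    return {
        "expression_coeffs": np.zeros(64),    # blendshape-style expression weights
        "wrinkle_map_weights": np.zeros(16),  # drives fine wrinkle detail
        "blood_flow": 0.0,                    # skin flushing / tone parameter
        "eyelash_pose": np.zeros(8),
    }


def render_digital_human(params: dict) -> None:
    """Stand-in for the realtime renderer that displays the digital human
    with the detail captured in the Light Stage session."""
    pass


def run_realtime_loop(camera_frames) -> None:
    """Drive the digital human frame by frame, flagging any frame where
    inference plus rendering blows the per-frame budget."""
    for frame in camera_frames:
        start = time.perf_counter()
        params = infer_face_parameters(frame)
        render_digital_human(params)
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET_S:
            print(f"Frame over budget: {elapsed * 1000:.1f} ms")


if __name__ == "__main__":
    # Simulate a short burst of 720p face-camera frames.
    fake_frames = (np.zeros((720, 1280, 3), dtype=np.uint8) for _ in range(10))
    run_realtime_loop(fake_frames)
```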

The results, and this is the first time they’ve been seen in public, are impressive enough. But DD wants to go further: remove the slight lag that still persists (about a sixth of a second) and make DigiDoug absolutely indistinguishable from the real thing.

“If you were having a conversation with DigiDoug, one on one, is it real enough that you could tell whether I was lying to you?” he asks.

It's exciting tech, and Roble is enthusiastic about the future possibilities, both in entertainment and in real-world situations, but he also acknowledges that there are issues.

“So this is where we are. We're on the cusp of being able to interact with digital humans that are strikingly real, whether they're being controlled by a person or a machine. And like all new technology these days, it's going to come with some serious and real concerns that we have to deal with.”

Frankly, deepfakes just got potentially very real indeed.

Have a look below.

https://www.ted.com/talks/doug_roble_digital_humans_that_look_just_like_us?

Tags: Post & VFX
