
This new system solves one of the most difficult aspects of motion capture



Our palm lines and fingerprints are as unique as our faces, and the movement of our fingers and hands is as expressive as our eyes, yet animating them has proved as tough a challenge as overcoming the Uncanny Valley.

Various attempts have been made over the years to track full five-finger movement, including the use of haptic gloves. Such attempts are most problematic on low-camera-count systems, as markers frequently become blocked by one another. As a result, animators usually create the gestures in post-production, often a time-consuming process disjointed from the actor’s initial motion capture session.

A new technique from mocap rig specialist Vicon aims to deliver not only precise hand tracking down to individual knuckles but also the ability to stream data directly into game engines, the new must-have production tools for making computer-generated content for movies or VR.

“Capturing full five-finger motion capture data has been at the top of customers’ wish lists,” says Vicon’s Tim Doubleday. “This is a milestone for motion capture.”

For the latest release of its VFX software Shōgun, Vicon worked with storied content creation facility Framestore to perfect finger tracking, based on a dense 58-marker set capable of tracking even minute knuckle movements. Users can also choose a reduced marker set to animate the fingers. That data can then be combined in real time with a user’s digital rig, producing a fully animated digital character capable of intricate movements, from writing a letter to playing an instrument.

A process that used to take weeks of painstaking animation can now be done instantly, it is claimed.

Just as significant, users can also record the data (which includes full skeletal mocap) directly into a game engine (Unity or Unreal Engine 4) and “within seconds” see an animated version of their character.
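To make the streaming idea concrete, here is a rough sketch of what a receiver for such a live skeleton feed might look like in Python. It is purely illustrative: the host, port and JSON message layout are assumptions, not Vicon’s actual Live Link protocol or DataStream SDK.

```python
# Hypothetical receiver for a live mocap stream: newline-delimited JSON
# frames of joint transforms over TCP. Host, port and field names are
# illustrative assumptions, not Vicon's actual streaming format.
import json
import socket

HOST, PORT = "127.0.0.1", 9000  # assumed address of the streaming source


def receive_frames(host=HOST, port=PORT):
    """Yield one decoded frame (dict with frame number and joint transforms) at a time."""
    with socket.create_connection((host, port)) as sock:
        buffer = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break  # stream closed by the sender
            buffer += chunk
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                yield json.loads(line)


if __name__ == "__main__":
    for frame in receive_frames():
        # e.g. {"frame": 120, "joints": {"hand_r_wrist": {"pos": [...], "rot": [...]}}}
        wrist = frame["joints"].get("hand_r_wrist")
        print(frame["frame"], wrist)
```

A game-engine plugin would do essentially the same thing each tick: read the latest frame and apply the transforms to the corresponding bones of the character rig.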

Virtual production is all about collapsing the whole production process – from previz to postviz or pre-production to post – into one realtime multi-discipline collaborative creation.

Vicon’s technique is another step toward that end goal, in which directors, cinematographers and VFX supervisors can mix live action with (photoreal) animated sets, objects and characters in realtime.

Ultimately, there will be no need to wait for rendering (since it will happen instantly); character animation tweaks can be made on the fly, fully realised and in full 360 degrees, allowing creative decisions such as blocking, composition and camera movement to be made in the moment.

The Lion King, on release, is already well advanced down that track, using VR headgear and simulated environments to let director Jon Favreau and his team tell a story built essentially in a studio with no physical sets, props or actors (save the pre-/post-recorded voice talent).

Shōgun 1.3 – which is in beta but due for release later this year – also includes the ability to export data using the Universal Scene Description (USD) format to iOS devices, where it is supported by Apple’s ARKit technology for use in augmented reality.

USD is already used by major VFX companies including ILM, Framestore and Pixar, and opens the door to AR character animation (digital avatars) appearing on iPhones and other AR devices.
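To give a flavour of what writing animation into USD looks like, the sketch below uses Pixar’s open-source USD Python bindings (the pxr module) to record a few frames of rotation on a single stand-in joint. The prim path, joint name and sample values are made up for illustration; this is not Vicon’s exporter.

```python
# Minimal USD export sketch using Pixar's open-source Python bindings (pxr).
# The prim path and animation values are illustrative, not Vicon's pipeline.
from pxr import Gf, Usd, UsdGeom

stage = Usd.Stage.CreateNew("hand_wave.usda")
stage.SetStartTimeCode(1)
stage.SetEndTimeCode(24)

# A single transform prim standing in for one animated hand joint.
hand = UsdGeom.Xform.Define(stage, "/MocapActor/RightHand")
rotate_op = hand.AddRotateXYZOp()

# Write per-frame rotation samples (a simple 0-to-30-degree wrist sweep).
for frame in range(1, 25):
    angle = 30.0 * ((frame - 1) / 23.0)
    rotate_op.Set(Gf.Vec3f(0.0, 0.0, angle), frame)

stage.GetRootLayer().Save()
```

The resulting .usda file is plain text and can be inspected directly; on Apple devices the same scene description, packaged as USDZ, is what ARKit consumes.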

The software also uses machine learning to automatically identify any missing mocap data in the performer’s movements and fill it in, based on the expected movement. The example given is of a waving hand missing a marker on a finger. The software will add that finger in “without forcing artists to spend precious time individually filling in these gaps one by one.”
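Vicon has not detailed its model, but the idea of gap filling itself is easy to illustrate. The sketch below patches a short occlusion in one marker’s trajectory with plain cubic-spline interpolation (NumPy/SciPy) rather than machine learning, on synthetic data, simply to show what “filling in” a missing marker means in practice.

```python
# Illustrative baseline for marker gap filling: cubic-spline interpolation
# across a short occlusion. Shogun uses learned motion models instead; the
# data below is synthetic and only demonstrates the idea of filling gaps.
import numpy as np
from scipy.interpolate import CubicSpline

# One marker's x/y/z positions over 10 frames; NaN marks frames where the
# marker was occluded (e.g. a fingertip hidden behind the palm).
positions = np.array([
    [0.0, 0.0, 1.00], [0.1, 0.0, 1.02], [0.2, 0.0, 1.05],
    [np.nan, np.nan, np.nan], [np.nan, np.nan, np.nan],   # occluded frames
    [0.5, 0.1, 1.12], [0.6, 0.1, 1.14], [0.7, 0.1, 1.15],
    [0.8, 0.1, 1.15], [0.9, 0.1, 1.16],
])

frames = np.arange(len(positions))
visible = ~np.isnan(positions[:, 0])

# Fit one spline per axis through the visible frames, then evaluate it at
# the missing frames to reconstruct the trajectory.
filled = positions.copy()
for axis in range(3):
    spline = CubicSpline(frames[visible], positions[visible, axis])
    filled[~visible, axis] = spline(frames[~visible])

print(filled[3:5])  # reconstructed positions for the two occluded frames
```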
