NVIDIA has just made super slow motion possible with any camera

Written by Adrian Pennington

A team of researchers claims to have cracked the secret of making videos shot at a lower frame rate look more fluid and less blurry when played back at a higher rate.

The difference between consecutive frames of video is usually negligible, the result of either the camera or an object in the scene moving very slightly from frame to frame. Compression techniques exploit this fact, using mathematical algorithms to predict (interpolate) the motion in a frame, given knowledge of previous or future frames, and so reduce the amount of data required.
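To get a feel for that redundancy, here is a minimal sketch that measures the average per-pixel change between consecutive frames of a clip. OpenCV and NumPy are our own illustrative choices, and the file name is hypothetical; neither is named in the article.

```python
import cv2          # OpenCV and NumPy are illustrative choices here,
import numpy as np  # not tools named in the article

# Measure how little actually changes between consecutive video frames --
# the redundancy that both compression and frame interpolation exploit.
cap = cv2.VideoCapture("clip.mp4")  # hypothetical input file
ok, prev = cap.read()
diffs = []
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    # Mean absolute per-pixel difference on a 0-255 scale.
    # Cast to int16 first so subtraction cannot underflow uint8.
    diffs.append(np.abs(frame.astype(np.int16) - prev.astype(np.int16)).mean())
    prev = frame
cap.release()
if diffs:
    print(f"average inter-frame change: {np.mean(diffs):.1f} / 255")
```

On typical footage this number is small relative to the full 0-255 range, which is precisely what prediction-based compression exploits.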

However, unless you record a high enough number of frames, footage slowed down for a slo-mo replay can look nigh on unwatchable. High-end pro cameras like the Phantom VEO 4K can capture full-resolution 4K images at 1,000 frames per second (fps) and do an unbelievably good job, but at $60,000 a pop they are used only for top-end natural history, sports or corporate promo applications.

Unpredictable events

While it is possible to take 240-fps videos with a cellphone, many of the moments we would like to slow down are unpredictable – the first time a baby walks, a difficult skateboard trick, a dog catching a ball – and, as a result, are recorded at standard frame rates.

Likewise, recording everything at high frame rates is impractical for mobile devices: it demands a lot of memory and drains the battery. Capturing at 240 fps, for example, produces roughly eight times the data of standard 30-fps recording.

But could high-quality slow-motion video be generated from existing, standard-frame-rate videos? That is precisely what the researchers claim to have achieved.

“Our method can generate multiple intermediate frames that are spatially and temporally coherent,” the researchers announced. “Our multi-frame approach consistently outperforms state-of-the-art single frame methods.”

Interpolation

For instance, generating 240-fps video from a standard 30-fps sequence requires interpolating seven intermediate frames between every two consecutive frames. To generate high-quality results, the math not only has to correctly interpret the motion between the two input images but also understand occlusions.

“Otherwise, it may result in severe artefacts, especially around motion boundaries, in the interpolated frames,” explain the researchers.
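To make the arithmetic concrete: upsampling 30 fps to 240 fps means synthesising frames at t = 1/8, 2/8, ..., 7/8 between every input pair. The sketch below is a deliberately naive baseline of our own devising, simple cross-fading rather than the researchers' motion-aware method, and it exhibits exactly the kind of ghosting around motion boundaries the quote describes.

```python
import numpy as np

def naive_interpolate(frame_a, frame_b, n_intermediate=7):
    """Cross-fade n_intermediate frames between frame_a and frame_b.

    A deliberately naive baseline, NOT the researchers' method: it ignores
    motion and occlusion, so moving objects ghost semi-transparently in
    both their old and new positions instead of actually moving.
    """
    steps = [(i + 1) / (n_intermediate + 1) for i in range(n_intermediate)]  # 1/8 ... 7/8
    return [((1 - t) * frame_a + t * frame_b).astype(frame_a.dtype) for t in steps]

# 30 fps -> 240 fps: seven new frames per consecutive input pair.
a = np.zeros((1080, 1920, 3), dtype=np.uint8)   # dummy black frame
b = np.full((1080, 1920, 3), 255, dtype=np.uint8)  # dummy white frame
mids = naive_interpolate(a, b)
print(len(mids))  # 7
```

Because the blend has no idea where pixels actually move between the inputs, it cannot place them correctly; interpreting that motion, and the occlusions it creates, is the problem the researchers set out to solve.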

They used over 11,000 YouTube clips shot at 240 fps, containing 300,000 individual video frames, to train the AI system, and their findings, first published last November, were officially backed by GPU-maker Nvidia this month.

The system makes use of Nvidia Tesla V100 GPUs and the cuDNN-accelerated PyTorch deep-learning framework.
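As a rough sketch of what that stack involves in practice, the generic PyTorch boilerplate below enables cuDNN auto-tuning and runs a stand-in model on the GPU. This is assumed for illustration only; it is not the team's actual network or training code.

```python
import torch
import torch.nn as nn

# Generic PyTorch/cuDNN setup, assumed for illustration;
# not the researchers' actual model or training code.
torch.backends.cudnn.benchmark = True  # let cuDNN pick the fastest conv kernels

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in model: a single conv layer where the interpolation network would go.
# Input: two consecutive RGB frames stacked channel-wise (3 + 3 = 6 channels).
model = nn.Conv2d(in_channels=6, out_channels=3, kernel_size=3, padding=1).to(device)

pair = torch.rand(1, 6, 256, 256, device=device)  # batch of one 256x256 frame pair
intermediate = model(pair)  # placeholder for a predicted in-between frame
print(intermediate.shape)   # torch.Size([1, 3, 256, 256])
```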

“Now you can take everyday videos of life’s most precious moments and slow them down to look like your favourite cinematic slow-motion scenes, adding suspense, emphasis, and anticipation,” suggests Nvidia.
