
NVIDIA has just made super slow motion possible with any camera


A team of researchers claims to have cracked the secret of making videos shot at a lower frame rate look more fluid and less blurry when played back at a higher rate.

The difference between individual frames of video is pretty negligible, the result of either the camera moving or an object moving very slightly from frame to frame. Compression techniques exploit this fact by using mathematical algorithms to predict (interpolate) motion in a frame – given knowledge of previous or future frames – and to reduce the amount of data required.
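As a toy illustration (a sketch for this article, not any real codec), the snippet below shows how little data a frame-to-frame difference can contain, which is the fact compression schemes exploit:

```python
# Toy illustration (not a real codec): consecutive frames differ in only a
# few pixels, so storing the per-pixel difference is far cheaper than
# storing every frame in full.
import numpy as np

frame1 = np.zeros((8, 8), dtype=np.int16)   # previous frame (all black)
frame2 = frame1.copy()
frame2[3, 4] = 255                          # a single pixel "moves"/changes

delta = frame2 - frame1                     # the residual a codec would store
print(f"{np.count_nonzero(delta)} of {delta.size} pixels changed")  # 1 of 64
```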

However, unless you record at a high enough frame rate, footage slowed down for a slo-mo replay can look nigh-on unwatchable. High-end pro cameras like the Phantom VEO 4K can capture full-resolution 4K images at 1,000 frames per second (fps) and do an unbelievably good job, but at $60,000 a pop they are used only for top-end natural history, sports or corporate promo applications.

Unpredictable events

While it is possible to take 240-fps videos with a cellphone, many of the moments we would like to slow down are unpredictable – the first time a baby walks, a difficult skateboard trick, a dog catching a ball – and, as a result, are recorded at standard frame rates.

Likewise, recording everything at high frame rates is impractical on mobile devices: it demands a great deal of memory and is power-intensive.

But could high-quality slow-motion video be generated from existing standard videos? A team of researchers claims to have cracked the code.

“Our method can generate multiple intermediate frames that are spatially and temporally coherent,” the researchers announced. “Our multi-frame approach consistently outperforms state-of-the-art single frame methods.”

Interpolation

For instance, generating 240fps video from a standard 30fps sequence requires interpolating seven intermediate frames between every pair of consecutive input frames. To generate high-quality results, the maths not only has to correctly interpret the motion between the two input images but also understand occlusions.

“Otherwise, it may result in severe artefacts, especially around motion boundaries, in the interpolated frames,” explain the researchers.
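The frame-count arithmetic, and why naive blending falls short, can be sketched in a few lines of Python. This is a simple linear cross-fade for illustration only, not the researchers' method, which warps frames along learned optical flow and models occlusion:

```python
# Minimal sketch of the 30fps -> 240fps arithmetic, using a naive linear
# cross-fade as a stand-in interpolator. This is NOT the researchers'
# approach; simple blending like this is exactly what produces ghosting
# artefacts at motion boundaries.
import numpy as np

def interpolate_pair(frame_a, frame_b, n_intermediate):
    """Blend n_intermediate frames between frame_a and frame_b."""
    out = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)        # timestamp in (0, 1)
        out.append((1 - t) * frame_a + t * frame_b)
    return out

source_fps, target_fps = 30, 240
n_intermediate = target_fps // source_fps - 1   # 240/30 = 8, so 7 new frames
print(f"{n_intermediate} intermediate frames per input pair")

a, b = np.zeros((4, 4)), np.ones((4, 4))        # two dummy greyscale frames
slowed = [a, *interpolate_pair(a, b, n_intermediate), b]
print(f"{len(slowed)} frames where there were 2")  # 9
```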

They used over 11,000 YouTube clips shot at 240fps, containing 300,000 individual video frames, to train the AI system, and their findings, first published last November, were officially backed by GPU-maker Nvidia this month.

The system makes use of Nvidia Tesla V100 GPUs and the cuDNN-accelerated PyTorch deep-learning framework.
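For readers who want to experiment with the same stack, a hypothetical sanity check (our example, not from Nvidia or the paper) confirms that PyTorch can see the GPU and the cuDNN backend before any training begins:

```python
# Hypothetical environment check (not from the article): verifying that
# PyTorch can see an NVIDIA GPU and the cuDNN backend before training.
import torch

print(torch.cuda.is_available())            # True on a CUDA-capable machine
print(torch.backends.cudnn.is_available())  # True when cuDNN is present
torch.backends.cudnn.benchmark = True       # let cuDNN auto-tune conv kernels
```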

“Now you can take everyday videos of life’s most precious moments and slow them down to look like your favourite cinematic slow-motion scenes, adding suspense, emphasis, and anticipation,” suggests Nvidia.
