17 Aug 2018

RED uses Nvidia’s new Turing architecture for realtime 8K

You want realtime 8K? You got it... Nvidia

RED has flicked a switch somewhere in its code to allow decoding to be offloaded to Nvidia GPUs and the results with the new Turing architecture are blisteringly fast.

We’ve already written about Nvidia’s new Turing architecture that the company announced at Siggraph and how one of the most complicated chips ever made is going to produce an incredible hike in GPU performance. Even better though is the news that Nvidia and RED have been working behind the scenes to apply the awesome power of the forthcoming Turing-based Quadro RTX GPUs to the issues surrounding 8K image processing.

8K post was always going to be a leap, and a challenge that would require some serious horsepower to surmount. What is genuinely jaw-dropping about what Nvidia and RED have been up to, though, is that it looks like the system they’ve built lets users work with 8K footage in full resolution, in realtime at over 24fps, using just a single-processor PC with one Quadro RTX GPU.
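To put numbers on that claim: RED’s 8K “Full Format” raster is 8192 x 4320 pixels (an assumption here; other 8K rasters differ slightly), so realtime playback at 24fps means the entire decode and debayer pipeline has to finish each frame inside roughly 42 milliseconds, a sustained throughput of around 850 megapixels per second. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope frame budget for realtime 8K playback.
# 8192 x 4320 is assumed here as the 8K "Full Format" raster.
WIDTH, HEIGHT = 8192, 4320
FPS = 24

pixels_per_frame = WIDTH * HEIGHT              # 35,389,440 pixels
frame_budget_ms = 1000 / FPS                   # ~41.67 ms per frame
throughput_mpix = pixels_per_frame * FPS / 1e6 # ~849 Mpix/s

print(f"Per-frame budget: {frame_budget_ms:.2f} ms")
print(f"Required throughput: {throughput_mpix:.0f} Mpix/s")
```

That 42ms window has to cover entropy decode, wavelet reconstruction and debayer for every frame, which is why offloading all three stages to the GPU matters.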

Single processor PC. Single GPU. Yes, you read that bit right.

The temptation here is obviously to assume that the tests were done on the most expensive of the new GPUs, the $10,000 Quadro RTX 8000 with 48GB memory. However, according to Jarred Land on reduser.net, that’s not the case. “We have been able to achieve 8k @ 24fps on [the] P6000 during testing,” he wrote, “and we are confident that the lowest end Turing will work well.”

The Quadro RTX 6000 ships with 24GB memory and has an estimated street price of $6,300, while the entry-level RTX 5000 has 16GB and will cost a relatively ‘minor’ $2,300. It’s unclear whether by ‘work well’ Land also means the ability to process 8K in realtime, but he also states that even 1080 Ti cards should see a massive improvement in performance.

“I put an older 1080ti in and ran the new code... realtime 24fps in 1/2 res looks pretty doable, although as a caveat the 1/2 resolution work in the code has not even really begun.

“This is on a pretty pedestrian low cost tower with a single Core i9 processor and only 32GB RAM and of course no Rocket card.”

The reason all this is possible is that RED has enabled GPU-based wavelet decode of REDCODE RAW files across Nvidia’s range. Land says that both entropy decode and wavelet decode have now been offloaded to the GPU, alongside debayering.
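REDCODE is a wavelet codec, so the decode step Land describes is essentially an inverse wavelet transform (after entropy decoding) now running on the GPU. As a purely illustrative sketch — not RED’s actual codec, which is proprietary — here is a one-level Haar wavelet round trip in Python showing the kind of sub-band arithmetic a wavelet decoder performs, and why it parallelises so well across thousands of GPU cores (every sample pair is independent):

```python
# Illustrative one-level 1-D Haar wavelet transform.
# NOT the REDCODE codec (which is proprietary); just the generic
# low/high sub-band split and reconstruction wavelet codecs use.

def forward_haar(signal):
    """Split a signal into low (average) and high (detail) bands."""
    lows  = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    highs = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return lows, highs

def inverse_haar(lows, highs):
    """Reconstruct the signal from its low and high bands (the decode step)."""
    signal = []
    for lo, hi in zip(lows, highs):
        signal.append(lo + hi)   # first sample of the pair
        signal.append(lo - hi)   # second sample of the pair
    return signal

samples = [9, 7, 3, 5]
lo, hi = forward_haar(samples)           # lo = [8.0, 4.0], hi = [1.0, -1.0]
assert inverse_haar(lo, hi) == samples   # lossless round trip
```

Each pair reconstruction is an independent add/subtract, which is exactly the kind of embarrassingly parallel work that maps well onto CUDA cores; entropy decoding is the harder, more serial stage to offload, which is why Land calling it out is significant.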

“All NVIDIA GPUs running CUDA will see a significant improvement - this improvement increases the efficiency of CUDA-based processing of R3Ds,” he wrote.

Transcoding is going to be boosted too. “All transcoding requires decode of the R3D first, speeds will be greatly improved but the encode itself will take the same amount of time,” he added. “Overall, the time required to export media will be significantly decreased, as the decoding/debayer has historically been the long pole.”
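The arithmetic behind that claim follows the familiar Amdahl’s-law pattern: if export time is decode plus encode and only decode is accelerated, total time shrinks in proportion to how dominant decode was. A hedged sketch — the 75/25 split and 10x speedup below are invented for illustration, not RED figures:

```python
# Amdahl's-law-style model of transcode time when only the decode
# stage is accelerated. All timings below are invented for illustration.

def export_time(decode_s, encode_s, decode_speedup):
    """Total export time when decode is sped up but encode is unchanged."""
    return decode_s / decode_speedup + encode_s

decode_s, encode_s = 75.0, 25.0   # assumed: decode is "the long pole"
before = export_time(decode_s, encode_s, 1)    # 100.0 s
after  = export_time(decode_s, encode_s, 10)   # 32.5 s
print(f"{before:.1f}s -> {after:.1f}s ({before / after:.2f}x faster overall)")
```

Under those assumed numbers a 10x decode speedup yields roughly a 3x faster export overall, and the unchanged encode stage becomes the new long pole — consistent with Land’s point that the encode itself “will take the same amount of time.”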

That’s not necessarily the end of it all either, with Land stating that beyond realtime decoding and grading of 8K, RED also expects there to be headroom in the decode performance to allow for effects layers as well.

As to when all this will happen, it looks like Nvidia is targeting December 2018 to release the code to NLE developers for incorporation.

And it’s not over there, either. “The story is only half complete,” hinted Land in a reply on reduser. “There are a slew of new "consumer" Nvidia cards coming…”

Andy Stout

Andy has spent over two decades writing about all aspects of the broadcast and film industries for a variety of high-profile industry publications on both sides of the Atlantic. During that time the industry has moved from 4:3 SD to 16:9 SD to HD and now on to 4K HDR. He's getting kind of curious to see where it goes next.
