
NVIDIA shows off big speed increases at SIGGRAPH 2019


At SIGGRAPH 2018, NVIDIA revealed its Turing GPU architecture, introducing new ray tracing extensions and bringing Volta's Tensor cores to the mainstream.

When the Turing GPUs reached the public, very few applications made use of either feature, and it was an open question how well they'd work in practice.

After several months, some AAA game engines gained support for both RTX and Tensor cores, using the RTX hardware to enable real-time ray tracing. As a first launch of the technology, its capability had its limits; gamers had to choose between accepting lower frame rates, living with more noise, or enabling NVIDIA's deep learning based noise reduction on the Tensor cores.

The results were a mixed bag, and the debate continued; RTX let game developers incorporate more realistic lighting effects into their games, but the noise reduction softened the image.

Meanwhile, not many other applications were using these new features. Among the few that did was DaVinci Resolve, which used the Tensor cores to accelerate its new AI object removal feature.

Now, at SIGGRAPH 2019, RTX is once again news.

NVIDIA is showcasing its OptiX ray tracing engine, which provides an API to help developers take advantage of the RTX and Tensor cores in NVIDIA GPUs. Rather than a render engine, it's a programmable rendering toolkit that lets developers build their own rendering engines, and it can take advantage of multiple GPUs without requiring special code from the developers. It can automatically combine GPU memory over NVLink, making it easy for developers to work with larger scenes, and it also lets developers use the Tensor cores the way that they want to.

What's more, it's free for commercial use. NVIDIA wants developers using its technology, and it's going out of its way to make adoption as easy as possible.

Luxion's KeyShot, a high-end standalone production renderer optimized for real-time rendering, supports physically based materials and lighting, ray tracing, and global illumination. In KeyShot 9, Luxion is introducing RTX and AI noise reduction support by taking advantage of NVIDIA's OptiX library. Luxion will continue supporting its CPU-based rendering engine, letting users choose whichever they prefer simply by flipping a switch in the GUI. KeyShot is already known for its speed and image quality, so this will be a welcome boost for its users.

NVIDIA has also been working with the Blender Institute to help build a new back end for its vaunted Cycles rendering engine. It uses the existing Cycles shaders and ray casters while using the OptiX hardware engine to accelerate Bounding Volume Hierarchy traversals and ray-hit calculations, and since nearly every current GPU feature in Cycles already works with the new back end, all you need to do is install an RTX-enabled GPU and enable the OptiX option to get the speedup.
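
If you'd rather flip that switch from a script, the same setting is exposed through Blender's Python API. Here's a minimal sketch, assuming Blender 2.81 or later (where the OptiX backend landed) and an RTX-capable GPU; the property names come from the Cycles add-on's preferences:

```python
import bpy

# Point Cycles at the OptiX backend (assumes Blender 2.81+ and an RTX GPU).
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"

# Refresh the device list and enable everything the backend can see.
prefs.get_devices()
for device in prefs.devices:
    device.use = True

# Render on the GPU rather than the CPU.
bpy.context.scene.cycles.device = "GPU"
```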

In preliminary tests, the speedups are pretty significant.

Also in the works is an update that uses the Tensor cores to remove noise. Since Cycles uses a Monte Carlo approach to rendering, there's a tradeoff between noise and rendering time: as the renderer processes more rays, the amount of noise left decreases, just as it does in a digital sensor with a longer exposure. Production rendering has no need for interactive levels of performance, but the viewport renderer does. So the Blender Institute is incorporating the OptiX AI noise reduction to cut render times, which, while useful for production rendering, will be especially appealing for interactive viewport rendering.
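
That noise-versus-samples tradeoff is easy to demonstrate outside of any renderer. This toy sketch (plain Python with NumPy, a uniform random value standing in for real ray contributions, nothing from Cycles itself) shows the classic Monte Carlo behaviour: quadrupling the sample count only halves the noise.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy stand-in for a path-traced pixel: each "render" at N samples just
# averages N random contributions, the way Cycles averages ray samples.
for samples in (16, 64, 256, 1024):
    # 1,000 independent renders of the same pixel at this sample count.
    renders = rng.uniform(0.0, 1.0, size=(1000, samples)).mean(axis=1)
    print(f"{samples:5d} samples -> noise (std dev) ~ {renders.std():.4f}")
```

Noise falling off as 1/√N is exactly why a denoiser pays off: honestly halving the remaining noise costs four times the render time, while the AI denoiser reaches a comparable result from far fewer samples.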

Isotropix Clarisse has expanded its hybrid ray tracing core to enable RTX acceleration in its viewport renderer as well, and Otoy's Octane renderer is also using OptiX.

NVIDIA is also showcasing RTX Studio laptops and mobile workstations made by its partners, enabling workloads that just a year ago would have required a desktop with powerful processors and multiple GPUs to run well even on thin-and-light laptops. The more robust machines, like the upcoming Acer ConceptD 9 and HP ZBook 15 and 17, are aimed at higher-end workloads, with memory capacities far beyond what the Pascal-based mobile Quadro GPUs allowed.

Launching new technology like RTX and Tensor cores is always challenging, especially in hardware. It requires significant R&D spending to design and validate the processors, and it adds circuitry to the chips that initially adds no value yet raises manufacturing costs because it consumes die area. As a result, most companies balk: they have to be willing to accept that overhead in order to pilot the technology, and they have to invest heavily in development and developer support so that third parties can actually use it. Otherwise the technology delivers no value, and the money and effort are wasted.

For companies willing to take a longer view, the way NVIDIA is doing, it's a different story. With production renderers adopting RTX, software like RED's new SDK using it to accelerate decoding, and game engines making ever more use of it, NVIDIA is priming the market for a next-generation GPU which, if it follows AMD's progression, will likely be manufactured on the same TSMC 7 nm process that AMD is using.

A reasonable guess is that NVIDIA will use the increased transistor budget to add more Tensor and RTX cores as well as more general computing resources, and, now that AMD has pushed it to market, odds are it will add PCIe 4.0 support as well.
