
Hardware Raytracing: Has it been a success or failure?



When nVidia introduced the Turing GPU with hardware ray tracing and tensor cores, there was a lot of debate about whether it was a good idea. On the one hand, these features offered the potential to speed up certain tasks by giving them dedicated hardware; on the other, both features consumed die area that nVidia could otherwise have used for more general-purpose compute units.

At launch, only a handful of games used the RT cores, and the performance wasn't quite up to real-time rendering without sacrificing resolution. Gamers generally found that Deep Learning Super Sampling (DLSS) often led to a soft image, while real-time ray tracing improved overall image quality, and as a result many gamers with RTX GPUs preferred to enable hardware ray tracing and lower the game's resolution to keep frame rates up.

For film post production, ray tracing was largely irrelevant, but the tensor cores weren't. Because nVidia had already debuted tensor cores in the earlier Volta GPU, its machine learning ecosystem was much more mature than its ray tracing ecosystem. Toolkits such as TensorFlow and PyTorch provided pre-built data structures and routines implemented on the tensor cores, and there were white papers and pre-trained neural networks for tasks like image completion, enabling third parties to implement features such as AI-based object removal.

One of the big benefits of these libraries and toolkits was that they not only implemented the tensor routines and data structures, they also abstracted the hardware. They take advantage of tensor hardware when it's available, fall back to general-purpose GPU compute when it isn't, and fall back again to the CPU if there is no suitable GPU at all. Each fallback incurs a performance hit (general-purpose GPU compute spent running a neural network isn't available for image processing, for example), but the network still produces the same results, as the sketch below illustrates.
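As a rough illustration of that kind of hardware abstraction, here is a minimal sketch using PyTorch, one such toolkit; the tiny network is just a placeholder, not any particular nVidia model, and the device-selection pattern is the point:

```python
import torch
import torch.nn as nn

# Use a CUDA GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A stand-in "neural network": a tiny convolutional filter acting on an image batch.
net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
).to(device)

# On Volta/Turing-class GPUs, running the model in half precision lets the
# framework route the heavy matrix maths through the tensor cores; on a CPU or
# an older GPU the exact same call simply runs on whatever compute is there.
frame = torch.rand(1, 3, 256, 256, device=device)
if device.type == "cuda":
    net, frame = net.half(), frame.half()

with torch.no_grad():
    result = net(frame)

print(result.shape, result.device, result.dtype)
```

The same script runs unchanged on a laptop with no GPU, a GTX-class card with only general-purpose compute, or an RTX card with tensor cores; the framework decides where the work lands, and only the speed changes.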

Gaining popularity

Gradually, more and more game engines are adopting hardware ray tracing, using Monte Carlo techniques and nVidia's tensor-accelerated AI denoising to keep the number of rays per pixel down, which enables higher frame rates.
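As a very rough sketch of why denoising matters, the snippet below is a toy, not how any real engine or nVidia's denoiser works: the "scene" is a single soft shadow and the "denoiser" is just a box blur. It renders with only two random rays per pixel and shows that denoising the speckled result gets closer to a many-ray reference than the raw image does:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64

def shade(u, v):
    """Toy path-tracing step: each ray picks one random point on an area light
    and tests visibility, so a single sample is pure noise (0 or 1) but the
    average over many samples converges to the true soft shadow."""
    light_visible_fraction = np.clip(4.0 * (u - 0.3), 0.0, 1.0)
    return (rng.random(u.shape) < light_visible_fraction).astype(float)

def render(samples):
    """Monte Carlo render: average `samples` jittered rays per pixel."""
    image = np.zeros((H, W))
    for _ in range(samples):
        ju, jv = rng.random((2, H, W))                  # sub-pixel jitter
        u = (np.arange(W)[None, :] + ju) / W
        v = (np.arange(H)[:, None] + jv) / H
        image += shade(u, v)
    return image / samples

def box_denoise(img):
    """Crude stand-in for an AI denoiser: a 3x3 box blur."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0

reference = render(512)                # many rays per pixel: near ground truth
noisy = render(2)                      # 2 rays per pixel: fast but speckled
print("error, 2 rays/pixel:  ", float(np.abs(noisy - reference).mean()))
print("error after denoising:", float(np.abs(box_denoise(noisy) - reference).mean()))
```

The real systems replace the box blur with a trained neural network that preserves edges and detail, but the economics are the same: cleaning up a cheap, noisy image is far faster than tracing enough rays to make it clean in the first place.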

The big news came when 3D rendering engines built on nVidia's OptiX ray tracing toolkit started appearing. Renderers such as the Blender Foundation's Cycles showed roughly a factor-of-two speedup when using OptiX on an RTX GPU compared with generic GPU compute on the same card, and Luxion's KeyShot showed a similar gain with OptiX.
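For anyone who wants to try that comparison themselves, Cycles exposes the choice of backend through Blender's Python API; a small sketch (assuming Blender 2.81 or later, an RTX GPU, and the standard "cycles" add-on) might look like this:

```python
import bpy

# Select the Cycles compute backend: "OPTIX" uses the RT and tensor cores on
# RTX GPUs, while "CUDA" uses generic GPU compute on the same card, which makes
# for a direct like-for-like speed comparison.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"        # switch to "CUDA" to compare
prefs.get_devices()                        # refresh the device list
for device in prefs.devices:
    device.use = (device.type == "OPTIX")  # enable only the OptiX devices

# Tell the scene to render on the GPU rather than the CPU.
bpy.context.scene.cycles.device = "GPU"
```

Rendering the same scene once per backend gives a reasonable measure of how much of the speedup comes from the RT cores rather than from the GPU itself.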

Even bigger news, however, was that both the next-generation Xbox and PlayStation consoles were getting hardware ray tracing. It wasn't much of a stretch to assume that AMD, which makes the CPUs and GPUs for both consoles, would therefore bring hardware ray tracing to its next generation of GPUs, and AMD has since confirmed exactly that. Intel has also confirmed that its upcoming Xe GPU line will incorporate hardware ray tracing.

In spite of its rocky start, hardware ray tracing is definitely here to stay, and by the end of 2020 it will be ubiquitous.
