
Rendering - thanks to GPUs - is speeding up faster than Moore's Law


(c) Maxime Roz Media, Matte Real

Realtime rendering is the hot topic in CG animation and VFX right now, thanks to a new breed of rendering solutions that harness the power of the latest generation of graphics processors. Launched just four months ago and promising ‘ludicrously fast 3D rendering’, Redshift is already causing a major stir amongst professional artists and CG enthusiasts.

Fans of Moore’s Law may wonder why rendering for animation and visual effects has remained such a laborious process since truly entering the mainstream some 32 years ago with the arrival of Tron and Star Trek II: The Wrath of Khan. But of course every leap forward in processing power has been more than matched by the desire to create ever more ambitious imagery. Increased model fidelity, higher scene resolutions and more physically accurate lighting and shading calculations all make the typical computer-generated scene in even the most modest TV show hundreds of times more complex than those featured in Hollywood blockbusters back in the 1980s and early ’90s. And so even studios boasting the largest render farms often measure render times in minutes or even hours, rather than seconds or frames per second. Until recently, that is.

 

 (c) Tim Crowson, Magnetic Dreams

In the last few years it’s been GPUs (graphics processing units) rather than CPUs (central processing units) that have evolved most spectacularly, so much so that many rendering tasks can now be handed over from the latter to the former. GPUs can handle many tasks more quickly, they’re comparatively cheaper and they consume less power. Crucially, GPUs are also far more scalable, with multi-GPU configurations allowing for hugely accelerated parallel processing of scenes. This development has ushered in a new era of realtime rendering, with several GPU-based renderers now challenging the status quo held by CPU-based stalwarts such as RenderMan. In contrast to the prolonged render times associated with these traditional solutions, this new breed offers interactive scene viewing (albeit with fully raytraced views usually only resolving on screen once the scene has remained still for a time).
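To make the appeal concrete, here is a minimal CUDA sketch, not taken from Redshift or any shipping renderer, of why rendering maps so naturally onto a GPU: each pixel’s shading is independent work, so one lightweight GPU thread per pixel replaces the serial loop a single CPU core would run. The gradient ‘shading’ is purely a placeholder for real ray tracing.

```
// Illustrative only: one CUDA thread per pixel instead of a serial CPU loop.
#include <cuda_runtime.h>

__global__ void shade(float3 *framebuffer, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;   // guard threads past the edge
    // Placeholder per-pixel work; a renderer would trace rays here.
    framebuffer[y * width + x] =
        make_float3(x / (float)width, y / (float)height, 0.0f);
}

int main() {
    const int w = 1920, h = 1080;
    float3 *fb;
    cudaMalloc(&fb, w * h * sizeof(float3));
    dim3 block(16, 16);                      // 256 threads per block
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    shade<<<grid, block>>>(fb, w, h);        // ~2 million pixels in flight
    cudaDeviceSynchronize();
    cudaFree(fb);
    return 0;
}
```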

Panos Zobolas, CTO at Redshift Rendering Technologies, discusses the emerging GPU rendering scene and just what makes his company’s offering unique.

 

 Q: There’s a lot of talk about realtime rendering, but in reality what does GPU-accelerated rendering deliver?

A: We actually generally prefer to avoid the term ‘realtime rendering’. Realtime suggests instant rendering, and none of the existing GPU renderers do that, at least not unless the scene is extremely simple. In fact some of them can still take several minutes or even hours to render a clean frame. For this reason we prefer to stick with the term GPU rendering, or GPU-accelerated rendering.

We’re very bullish on the future of GPU rendering. GPUs are getting faster at a much higher rate than CPUs. Advancements in CPU performance have been slowing down, probably because there aren’t many ‘killer apps’ that require 8- or 16-core CPUs, whereas the videogame market continues to feed demand for better and faster GPUs. The launch last year of the latest generation of videogame consoles, the Xbox One and PlayStation 4, as well as the rising popularity of 4K displays, has been driving the continued mass-market development of higher-performance GPUs with more onboard memory. This has a trickle-down effect, providing massive GPU computing power at consumer-level prices and directly benefiting GPU rendering.

 Q: So just who do you see as the target market for Redshift?

A: Right from the beginning we developed Redshift to be a final frame renderer suitable for the broadcast and film markets, rather than as a previs tool. That being said, we also see a place for Redshift in adjacent markets like architectural, medical and engineering visualisation.

Redshift currently integrates with Maya and Softimage, so naturally most of our customers are in the media and entertainment sector. The majority of them are small-to-medium studios or freelancers, but Redshift is also already being used or evaluated by some of the largest studios in the industry. Most of them are traditional RenderMan, Arnold or V-Ray shops that have built render farms and pipelines around these products. Making Redshift part of their toolbox says a lot, both about Redshift's capabilities and about the industry's changing perception of GPU rendering in general.

To date Redshift has been used in a number of broadcast, print and web ads, as well as music videos, medical/engineering visualisations and even some architectural visualisations. We're currently developing a 3ds Max plugin to reach even more users in the architectural visualisation space. Archviz is notorious for its long rendering times, especially with interior shots, so we believe that archviz users will particularly benefit from and enjoy Redshift's rapid iteration/rendering capabilities.

 Q: Do you believe that a GPU renderer like Redshift can replace the more established CPU-based rendering solutions?

A: If you ask most of our customers, they will say yes! In fact, the majority of Redshift customers did not come from other GPU renderers but from CPU renderers. We believe that mainstream CPU renderer users who are looking for a GPU renderer that has a familiar feel and workflow will feel right at home with Redshift.

Do we think that Redshift is going to be the perfect solution for everyone? Not at this stage. There are still a few missing features that will be dealbreakers for certain users. For example, some users need to be able to author their own shaders. Others work on scenes that far exceed 100M unique triangles and don't want to suffer any performance degradation because of this. Or they might have considerable hardware and time investments in a particular CPU renderer and aren't ready to let it go. But we’re working aggressively to close the CPU-GPU feature gap and remove these adoption barriers, allowing everyone to finally make the jump to GPU rendering.

 Q: Redshift offers biased rendering (and so renders with some concessions to physical realism in the interests of efficiency), whereas the other GPU solutions currently out there favour an unbiased approach. What does that mean in practice?

A: Redshift is a physically based, biased renderer. It can handle concepts such as light energy conservation and can use physical (real-world) units for cameras, lights and surfaces, which greatly simplifies the process of rendering realistic images. The ‘biased’ part means it provides the user with a few optional lighting ‘shortcuts’, which might be slightly less accurate but are significantly faster to render. It also means the user has the ability to make camera/surface/lighting tweaks that break strict physical correctness to help achieve a specific look or style. We believe this combination offers the best balance between performance, quality and control.
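To give a feel for what such a ‘shortcut’ looks like in practice, here is a minimal sketch, not Redshift’s actual implementation, of one classic biased technique: clamping rare, very bright indirect samples (‘fireflies’). The clamp skews the Monte Carlo estimate slightly, which is exactly the bias being traded away, but it cuts variance so sharply that far fewer samples are needed for a clean frame.

```
// Illustrative only: clamping bright samples trades a little accuracy (bias)
// for much less noise (variance), so frames converge far sooner.
#include <algorithm>
#include <cstdio>

struct RGB { float r, g, b; };

RGB clampRadiance(RGB L, float maxValue) {
    float peak = std::max(L.r, std::max(L.g, L.b));  // brightest channel
    if (peak > maxValue) {
        float s = maxValue / peak;                   // rescale uniformly,
        L.r *= s; L.g *= s; L.b *= s;                // preserving hue
    }
    return L;
}

int main() {
    RGB firefly = {900.0f, 450.0f, 90.0f};        // one unlucky caustic path
    RGB tamed   = clampRadiance(firefly, 10.0f);  // biased but low-variance
    printf("%.1f %.1f %.1f\n", tamed.r, tamed.g, tamed.b);  // 10.0 5.0 1.0
    return 0;
}
```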

Q: While GPU technology has evolved hugely in the last couple of years, there’s still quite a disparity between the memory available for standard CPU rendering and GPU memory sizes – graphics cards are only just shifting from 12GB to 16GB, compared to the 256GB typically available for CPU rendering. How much of a challenge does this pose?

A: There are certainly some remaining technical limitations to be dealt with. In almost all GPU renderers, available GPU memory puts a hard ceiling on the number of triangles and textures that can be used in a scene. Redshift overcomes this limitation by allowing the GPU to access the main system memory, a technique called ‘out-of-core’ rendering. In some cases, though, out-of-core rendering comes at a performance cost. Out-of-core texture access works extremely well, so with Redshift it's possible to render scenes with several gigabytes’ worth of texture data without negatively impacting performance. Out-of-core geometry access, on the other hand, can impact performance.
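Redshift’s out-of-core system is proprietary, but the general idea can be sketched with CUDA’s unified (managed) memory: a buffer backed by system RAM that the GPU can address directly, with pages migrating on demand on Pascal-class and later hardware. The scale kernel below is a hypothetical stand-in for geometry or texture fetches; the point is simply that the working set may exceed onboard VRAM, at the cost of transfer latency when data isn’t resident.

```
// Illustrative only: unified memory lets a kernel touch a host-backed buffer,
// so the dataset is no longer capped by the card's onboard VRAM.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;       // stand-in for geometry/texture reads
}

int main() {
    const size_t n = 1ull << 28;                     // ~1 GiB of floats
    float *data;
    cudaMallocManaged(&data, n * sizeof(float));     // backed by system RAM
    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;
    scale<<<(unsigned)((n + 255) / 256), 256>>>(data, n);
    cudaDeviceSynchronize();          // pages migrate as the GPU touches them
    printf("data[0] = %.1f\n", data[0]);             // prints 2.0
    cudaFree(data);
    return 0;
}
```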

To mitigate this, we recently introduced some optimisations allowing Redshift to fit approximately 100-to-110 million unique triangles within 4GB of GPU memory. On a GPU with 6GB of video RAM this still leaves roughly 2GB of space for other types of data such as textures which, as I mentioned, can get away with a much smaller ‘in core’ footprint without impacting performance in a significant way. So Redshift can go out-of-core when it needs to, but can also render some surprisingly complex scenes at full speed on today's commodity hardware. Also, future advancements in GPU hardware will reduce any performance impact of going out-of-core.
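Those figures imply a strikingly compact representation; a quick back-of-the-envelope check (our arithmetic, not Redshift’s internals) makes the point:

```
// 4 GiB spread across ~105 million unique triangles leaves roughly 41 bytes
// per triangle -- far below a naive three-independent-vertices layout, which
// points to an indexed, compressed geometry format.
#include <cstdio>

int main() {
    const double bytes     = 4.0 * 1024 * 1024 * 1024;   // 4 GiB budget
    const double triangles = 105e6;                      // ~100-110 million
    printf("%.1f bytes/triangle\n", bytes / triangles);  // ~40.9
    return 0;
}
```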

 

 (c) Magnetic Dreams

Q: Given that the product has only been out of beta testing since April, what core features would you say Redshift still lacks, and what additions can users expect further down the line?

A: In terms of missing core features, I'd say the most important is the ability to render volume data such as smoke and fire. Redshift will support the native DCC [digital content creation] capabilities for these, as well as open standards such as OpenVDB, which are rising in popularity. An open shader SDK, and possibly OSL (Open Shading Language) support, is another big one.

Adding support for more DCCs is very high on our list. We have a 3ds Max plugin currently in development. We're also in the planning stages for supporting other DCCs like Cinema 4D, Modo, Blender and Houdini, and we're currently modifying our software development kit to accommodate that. At the same time, we're wrapping up some loose ends in our Maya integration, like supporting XGen and scene assemblies. We're also revamping our proxy system to make it even easier to export and externally reference sections of the scene, and we're looking into expanding our procedural texture node support as well as introducing native Substance support. At a more basic level, we'll also be adding support for Linux and Mac OS.

Optimising memory access is also a topic of ongoing research here at Redshift. Our goal is to further reduce or completely eliminate those types of bottlenecks. We have many ideas on that topic and hope to introduce some considerable improvements during the course of the next few months.

 

A fully featured, watermarked free demo of Redshift is now available. The full version retails at $500 for a node-locked licence and $600 for a floating licence.

 
