
SIGGRAPH 2021: Here's a summary of this year's event

Image: SIGGRAPH.

Last week saw the virtual version of the SIGGRAPH conference take place. Phil Rhodes summarises what he saw.

Ah, SIGGRAPH, that yearly festival of computational one-upmanship in which “theirs” is compared to “ours” and might, in a few years, become available as an expensive After Effects plugin. This year, the conference is brought to you by the words “neural,” as in “ACORN: Adaptive Coordinate Networks for Neural Scene Representation,” and “learning,” as in “Learning Active Quasistatic Physics-based Models From Data,” as if a computer might learn anything from no data at all.

From this we might gather, as we have before, that machine learning is either a panacea likely to massively reduce the time we all spend staring at progress bars, or just the latest in a ticker-tape parade of buzzwords – or, more likely, a reasonably useful compromise somewhere between the two.

Having established a suitably cautious degree of optimism, then, let’s browse the conference schedule. The first thing that catches the eye – with the valuable exception of “morning coffee,” a crucial prerequisite for technical papers at 7am – is a presentation of some new research on “Denoising and Guiding.” Noise reduction is an attractive prospect for both CG people, to whom it means faster renders, and camera people, to whom it means faster cameras.

Volumetric blobs

Soon after comes a session about new cloud and lighting simulations in Unity. The word “volumetric” often isn’t that well-defined; after all, a volumetric sphere is, give or take shading, just a circle by the time it’s rendered to a 2D image, and realtime simulations have been drawing something like volumetric clouds using clusters of billboarded blobs for quite a while. In modern use, the term tends to refer to something much closer to photorealism, and as such Unity’s new features should make for some very attractive flight simulations.
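
To make the distinction concrete, here’s a minimal sketch – in Python, purely illustrative and nothing to do with Unity’s actual implementation – of the raymarching idea behind properly volumetric clouds: step along the view ray, sample a density field, and accumulate opacity via Beer-Lambert extinction, rather than stacking flat billboards. The density function below is a hypothetical stand-in for whatever noise-driven field a real renderer would use.

```python
# Illustrative raymarcher for a volumetric "cloud": march a ray through a
# density field and accumulate opacity using Beer-Lambert extinction.
import math

def cloud_density(x, y, z):
    # Hypothetical density field: a soft spherical puff centred at the origin.
    r = math.sqrt(x * x + y * y + z * z)
    return max(0.0, 1.0 - r)  # density falls off to zero at radius 1

def march_ray(origin, direction, steps=64, step_size=0.05, sigma=4.0):
    """Return the accumulated opacity (alpha) of the volume along one ray."""
    transmittance = 1.0
    ox, oy, oz = origin
    dx, dy, dz = direction
    for i in range(steps):
        t = i * step_size
        d = cloud_density(ox + dx * t, oy + dy * t, oz + dz * t)
        # Beer-Lambert: light surviving this step falls off exponentially
        # with the density encountered along it.
        transmittance *= math.exp(-sigma * d * step_size)
    return 1.0 - transmittance  # opacity as seen by the camera

# A ray aimed straight through the puff comes out nearly opaque; one that
# misses it entirely stays fully transparent.
print(march_ray((0.0, 0.0, -2.0), (0.0, 0.0, 1.0)))
print(march_ray((0.0, 5.0, -2.0), (0.0, 0.0, 1.0)))
```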

Unity isn’t always the choice for LED volume graphics (which sounds so much better than “clever back projection”), but improved realism certainly can’t hurt its usefulness. Given that the Unity people presented a session entitled “New Tools for Cinematic Creators” later that day, that thought has clearly occurred to quite a few people in the field.

To linger in the world of realtime interactive entertainment (that is, video games), consider also Cyberpunk 2077, a game that was the subject of several technical presentations on transparency, shadow rendering and the simulation of area light sources. Skipping over the controversy that arose from the game’s bug-beset first few months, it’s worth recognising that recent releases often have sufficient polish that, at first glance, they might seem to compete with prerendered content – that is, games are starting to look like Pixar movies. At least, older Pixar movies.

We should be clear that things rendered using APIs like Direct3D and OpenGL generally only look as if they’re matching the likes of Maya and RenderMan. That illusion relies on a lot of shortcuts, which limit what can be done; old hands can usually spot the fnords. The cleverness of those shortcuts is very much the subject of this sort of discussion. With that in mind, consider the talk given around midday on Tuesday about the Vulkan API, designed very much to usurp the likes of Direct3D – though GL is a tougher subject for replacement, as it’s already survived decades of considerable upheaval across the field in which it operates.

The two previously-separate fields of realtime and prerendered entertainment (games and movies, basically) came together in a presentation on the colour management system OpenColorIO, which involved people not only from Autodesk and Epic Games but also Netflix. Currently at Version 2, the system seems to have picked up some popularity across the CG (Blender, Maya, Arnold, Houdini), compositing (AE) and game development (CryEngine, Unreal Engine) worlds. It’s not a new idea, tracing its history back almost twenty years, and as such it might not be entirely fair to complain that it’s not so much a colour management system as yet another colour management system. Either way, it has full knowledge and understanding of ACES, which means that things using it should slot neatly into workflows aimed at games or film and TV work, or at least as neatly as anything in colour management ever fits together.
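
For a taste of what that looks like in practice, here’s a minimal sketch of a colour conversion driven through OpenColorIO’s Python bindings. It assumes PyOpenColorIO v2 is installed and that the OCIO environment variable points at a config (one of the ACES configs, say); the roles used below resolve to whichever colour spaces that particular config defines, so the numbers you get depend entirely on it.

```python
# Minimal sketch of an OpenColorIO v2 conversion from Python. Assumes the
# PyOpenColorIO bindings are installed and $OCIO points at a config; the
# roles below resolve to whatever colour spaces that config defines.
import PyOpenColorIO as OCIO

config = OCIO.GetCurrentConfig()  # the config named by the OCIO env variable

# Build a processor converting from the scene-linear working space to the
# compositing-log space, then get a CPU implementation of it.
processor = config.getProcessor(OCIO.ROLE_SCENE_LINEAR, OCIO.ROLE_COMPOSITING_LOG)
cpu = processor.getDefaultCPUProcessor()

pixel = [0.18, 0.18, 0.18]        # middle grey in the source space
print(cpu.applyRGB(pixel))        # the same value expressed in the target space
```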

AMD vs Intel

Something that stood out in the 2021 lineup purely for its uniqueness was a session entitled “The Impact of AMD Processors in Media & Entertainment.” While this session was presented by an equipment distributor which might have a vested interest in AMD’s success, the idea of anything other than Intel being promoted at this level for the rendering and workstation market is eyebrow-raising in itself.

This is, to be fair, not the first time that the world’s biggest CPU manufacturer has found itself on the back foot. Recall that Intel did not invent the concept of x86-64, and there was a time in the 2000s when AMD’s processors were noticeably more cost-effective than those from across the aisle. Back then, though, everyone seemed secure in the idea that nobody ever got fired for buying Xeons. Things seem to have changed, which, alongside soaring interest in high-performance ARM-based CPUs, might be yet another reason for Intel to feel nervous.

Pushing the other way, Intel’s name appears next to several quite frankly Intel-friendly talks, one of which was titled “Advantages of Powerful CPU Rendering” and described as “insights… on how CPU based rendering still delivers the best scalable and most stable solution for professional demands.” With comprehensive GPU rendering now increasingly old news even in non-realtime renderers (at least in most software, with a brow-furrowed glare at the holdouts) that might seem a little like a tardily-slammed stable door, although it’s probably still true to at least some extent.

Beneath the glossily-sponsored presentations there was, of course, the usual stack of technical papers. Deciphering the titles usually reveals that most of them describe faster, cleverer versions of things we’ve seen before and would therefore like to be available right now; until then, ploughing through dense pages of algebraic expressions is a task best left to the PhD mathematicians among us. Still, dry as it can get, at least the event has remained somewhat aloof from the broader obsessions of information technology. Blockchain technology, for instance, was mercifully absen… oh no, wait, that was on Monday. “How to upload your art to the blockchain.” Next year, synergy: neural blockchain?
