<img src="https://certify.alexametrics.com/atrk.gif?account=43vOv1Y1Mn20Io" style="display:none" height="1" width="1" alt="">

Nvidia's staggeringly powerful new GPU means the future just got closer by ten years



Siggraph, the huge trade show for the graphics industry, is taking place right now. This year's Nvidia keynote was a blockbuster, and it summed up almost perfectly how and why technology is accelerating.

Ask most people what they think is happening to computing power and they'll say - rightly - "it's slowing down".

That's because CPUs, which have always relied on Moore's law to get faster, are simply not improving at the rate that they used to. We've all been used to several decades of phenomenal growth in CPU speed and power, but now that's all finished. It's been scuppered by physics. You can't make components smaller than an atom, or even close to it. There's still a tiny bit of room for improvement, but on top of that you have to somehow dissipate all the heat that's generated by all those unbelievably small components. Lots of heat in a very small space is a big problem, and it's brought the whole thing tumbling down.

But while Moore's law might be over, it's not the end of accelerating technology, because at the same time as its familiar upward curve has tailed off, another has taken over, and left it for dust. GPUs are made of silicon too, and they are subject to the same laws. But what's different is the way they work. Computer graphics, and, as it turns out, all kinds of applied computing tasks, are ideally suited to parallel architectures. You can see why: with video, there's no reason why every pixel on the screen shouldn't have its own dedicated processor: they all have to display the result of calculations at the same time. "One after another" or Serial Processing doesn't work for graphics.
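To make that contrast concrete, here's a minimal sketch in Python, with NumPy standing in for what a GPU does across thousands of cores: the same brightness adjustment written pixel by pixel, serial style, and as one whole-frame operation. The frame size and gain value are purely illustrative.

```python
import numpy as np

# A toy greyscale frame: one brightness value per pixel.
frame = np.random.rand(1080, 1920)

def brighten_serial(img, gain=1.2):
    # "One after another": visit every pixel in turn, the CPU-style approach.
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = min(img[y, x] * gain, 1.0)
    return out

def brighten_parallel(img, gain=1.2):
    # The same operation expressed as one whole-frame calculation. A GPU runs
    # this across thousands of cores at once; NumPy's vectorised form is only
    # a convenient stand-in for that idea.
    return np.clip(img * gain, 0.0, 1.0)
```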

So, while GPUs have benefited from Moore's law in exactly the same way as CPUs, they're not bound by it in the same way. GPUs can "spread out" across ever larger pieces of silicon. If you want a more powerful GPU (and this is an almost criminally reckless oversimplification) you just duplicate more GPU cores on a bigger chip.

This explains a lot of the progress in GPUs. You can see from the embedded video of Nvidia's keynote that as CPUs have tailed off, GPUs have accelerated.

Huge jump

This, alone, is impressive. But there's much more to it than that. The "jump" in the title of this piece wouldn't have happened through conventional GPU growth by itself. There are at least two fundamentally different (and somewhat new) technologies at work here as well, and these don't just add to the existing techniques, they multiply them.

I urge you to watch the whole Nvidia clip. It's breathtaking. What it shows is three technologies - GPU Compute, Dedicated Ray Tracing and Artificial Intelligence/Machine Learning - all working together to boost the abilities of a single graphics card to a stage we wouldn't have expected to reach for another five or ten years.

That's even more impressive than it sounds, because in an era of accelerating technology, the next ten years might arguably (and, ultimately, demonstrably) advance us by the equivalent of a hundred of our past years.

It's hard to put this in perspective but perhaps the best way to do that is to outline the problem that it solves.

For years, software developers and hardware manufacturers have been working together to make photo-realistic computer graphics. Ultimately, the only way to do this has been to use Ray Tracing. It's a technique that's been understood for a very long time, but the issue with it is rendering time. Tracing the path of every ray through every medium, and reflecting it from every type of surface, is a herculean task. It's massively processor intensive. It is easily, by some margin, the slowest way to render graphics.

Frustratingly, though, it's also the best. If you want realistic graphics with true global illumination, there's no alternative to Ray Tracing.
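To give a feel for where that cost comes from, here's a deliberately tiny ray tracer sketch in Python: one ray per pixel, one sphere, no bounces, no lighting. The resolution and the scene are made up purely for illustration.

```python
import numpy as np

def hit_sphere(origin, direction, centre, radius):
    # Solve the ray-sphere quadratic; return the nearest positive hit distance, or None.
    oc = origin - centre
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

width, height = 320, 180                    # a tiny frame; 4K is around 8 million pixels
centre, radius = np.array([0.0, 0.0, -3.0]), 1.0
image = np.zeros((height, width))

for y in range(height):
    for x in range(width):
        # One primary ray per pixel, fired from the camera through the image plane.
        u = (x / width - 0.5) * 2.0 * (width / height)
        v = (0.5 - y / height) * 2.0
        direction = np.array([u, v, -1.0])
        direction /= np.linalg.norm(direction)
        image[y, x] = 1.0 if hit_sphere(np.zeros(3), direction, centre, radius) is not None else 0.0
```

Even this toy fires 57,600 rays just to flat-shade a single sphere. A production renderer traces many rays per pixel, bounces them through scenes with millions of polygons, and then does it all again for the next frame.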

Most of us never see the sheer computing power needed to make blockbuster animated movies. It costs millions of dollars - both in hardware and in the power to keep it all running. And that's without Ray Tracing, which is still too slow and costly to use for a whole movie.

Whoever solves this problem is therefore likely to sell a few units.

Ray Tracing in real time is even harder. Almost unimaginably so. And so we come to the big leap. Nvidia has designed a GPU that does Ray Tracing in real time. You can see it happening in the video below (it starts about 20 minutes into the presentation). It really is worth watching all of this, because it absolutely defines the state of the art.

Turing

The new chips from Nvidia, with a new architecture called "Turing", have three distinct areas of functionality that all work together. There's the "traditional" GPU compute area that handles all the shaders; a new section dedicated to Ray Tracing; and finally an area for "Tensor Processing". This is essentially where the AI takes place.

Jensen Huang, CEO and founder of Nvidia, said that the Turing products have taken ten years of research and only work because of the interaction of the shaders, the Ray Tracing and the AI. It is only by having all of these in a single chip that it's been possible to Ray Trace in real time and effectively bring the future ten years closer.
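As a purely conceptual sketch, not Nvidia's actual pipeline, the hand-off between the three units might be pictured like this; every function here is a trivial stand-in written for illustration.

```python
import numpy as np

def shade(width, height):
    # Stand-in for the conventional shader/compute stage: produce a base frame.
    return np.full((height, width), 0.5)

def trace_rays(base, samples_per_pixel=1):
    # Stand-in for the dedicated ray-tracing stage: a few rays per pixel give an
    # accurate but noisy result, simulated here by adding random noise.
    noise = np.random.normal(0.0, 0.1 / np.sqrt(samples_per_pixel), base.shape)
    return np.clip(base + noise, 0.0, 1.0)

def ai_denoise(noisy):
    # Stand-in for the tensor-core stage: a trained network would reconstruct a
    # clean frame; a simple 3x3 box blur stands in for that reconstruction here.
    padded = np.pad(noisy, 1, mode="edge")
    h, w = noisy.shape
    return sum(padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)) / 9.0

final_frame = ai_denoise(trace_rays(shade(1920, 1080)))
```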

It's easy enough to see how conventional GPU compute combined with dedicated Ray Tracing might work. But AI? This is perhaps the most surprising aspect of all, with wide-ranging implications.

Nvidia "trained" their AI by showing it highly representative images, but "jittered" - in other words: degraded by a known amount. Given enough of these examples, the AL/ML (Machine Learning) is able to say "I see that's a noisy (X) and I know what a perfect (X) would look like, so I will draw one". Of course this is a vast over simplification but it's not far from the actual process in essence.

The sheer scale of this achievement is staggering. It's one of the most complicated chips ever made, with over eighteen thousand million (18 billion) transistors. It massively speeds up render times and uses less power.

And it shows that we're living at a time when "exponential" is no longer an adequate description of the way that technology is headed. The reality is that we're set for an unpredictable future where, out of the blue, technology will take huge and unexpected leaps.

It's going to change everything, sooner, probably, than anyone expects.

Tags: Technology
