
Nvidia shows us how to reach the singularity

Written by David Shapton | Mar 25, 2022 9:00:00 AM

Nvidia's keynote this week was breathtaking in its scope and unfettered ambition to pave the way to an AI future.

Do you remember back in the early 2000s when GPUs started to make a big difference to video editing? Instead of having to render even a simple dissolve - or any other effect - before you could see it, you could begin to play effects back in real time. Since then, GPUs have reached unimaginable levels of power and virtuosity, and GPU computing has for a long time looked like the right architecture not just for video and graphics but for all kinds of processing chores that a CPU would struggle with.

And twenty years later, GPUs are about to do it again, not just to video, but to everything.

You should always put a note in your diary for Nvidia's keynotes. Their presentations are always sumptuous and sometimes breathtaking. There's no need to resort to PowerPoint when your company makes the processing tools at the core of nearly every creative workflow.

But it's a sign of the times that the keynote spent very little time on graphical computing, and almost all of it talking about Machine Learning and Artificial Intelligence. Omniverse was also featured at length, and towards the end of the presentation, we learned how AI and Omniverse are destined to be interlinked, with each enabling the other.

Omniverse and the metaverse

What's Omniverse? You could say that it's Nvidia's Metaverse technology, but it's not quite that. If anything, Omniverse - a micron-accurate and physically modelled version of reality - is closer to what the metaverse should be. To put it another way, if you were going to build a metaverse on a single technology, Omniverse would probably be it.

If I had to say why Nvidia has built such a formidable array of tools for future stuff, I'd say it's that they understand scale and scalability. Essentially, that's what a GPU is: relatively simple computing cores built at scale. Nvidia is still doing that, of course, but it is also designing blazingly fast interconnects - between multicore chips and between computers - that appear to be the answer when "traditional" interconnects like PCIe aren't fast enough. It's the same thing with networks: build your own if you want the fastest network switch on earth. When you're building for the future, you're building for exponential change, and scale and scalability are the foundation for that.

Here's an example. Jensen Huang, Nvidia's founder and CEO, told us that the scientific community estimates supercomputers that can accurately predict regional climate change will need to be a billion times faster than the best we have now. Achieving that isn't impossible for Nvidia, but the company is working on the assumption that AI itself will help it build that machine.

What Nvidia has also got right is building a portable technology stack. Each of its "platforms", whether they're for ML, AI, Natural Language Processing (NLP), robotics or driverless cars, can run on almost all of its current hardware. That's not to say that it performs as well on a single GPU as on a cluster of supercomputer "Pods", but it does mean that just about everyone can get a flavour of Nvidia's vision of the future, no matter how modest their computer.

One statistic stood out because it puts paid to the idea that technology is slowing down. In the last ten years, Nvidia's AI computing platforms have increased in power a millionfold. Jensen Huang described this as "the compound effect of Nvidia's technology", and in the same breath said that "companies are becoming intelligence manufacturers".
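
To put that "compound effect" into perspective: a millionfold gain over ten years works out at roughly a fourfold improvement every single year, sustained for a decade. A quick back-of-the-envelope check (plain Python; the only input is the headline figure from the keynote):

    # A millionfold improvement over ten years, expressed as a constant
    # year-on-year growth factor: per_year ** 10 == 1_000_000.
    total_gain = 1_000_000
    years = 10
    per_year = total_gain ** (1 / years)
    print(f"Equivalent annual improvement: {per_year:.2f}x")
    # Prints roughly 3.98x - i.e. performance quadrupling every year,
    # well beyond classic Moore's Law scaling of roughly 2x every two years.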

What are the implications?

The implications of that are too immense to go into here, but what you can glean from it is that the whole world of business (and entertainment, leisure, medicine, biology, science and so on) is going to depend on high-quality AI, which in turn is going to depend on precisely the type of computing that Nvidia is designing right now.

For now, let's focus on just one aspect of the Nvidia keynote. During the presentation, we saw several examples of AI training itself: learning either from thousands of examples of prior knowledge or, more intriguingly, through simple trial and error. Central to this technique was the idea of a Digital Twin.

It's certainly not a new idea, but what is new is the scale and accuracy with which it can now be done. Omniverse is, essentially, "digital twin software", and, for me, it's the best embodiment of the metaverse yet. It has the crucial capabilities: photorealistic, real-time rendering; widely implemented interchange languages for scene and material description; precision measurement; and - absolutely crucially - accurate physics. In Omniverse, you can build a car from its engineering files, and it will behave in the virtual environment almost indistinguishably from its real-life counterpart.
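
For a sense of what those "interchange languages for scene and material description" look like in practice: Omniverse is built around Pixar's Universal Scene Description (USD). The snippet below is a minimal sketch using the open-source USD Python bindings (the pxr package); the prim names and values are invented purely for illustration, and it is nothing like a full engineering-grade digital twin.

    # Minimal sketch: author a tiny USD stage of the kind Omniverse can consume.
    # Assumes the open-source USD Python bindings ("pxr") are installed.
    from pxr import Usd, UsdGeom, Gf

    stage = Usd.Stage.CreateNew("digital_twin_sketch.usda")
    UsdGeom.SetStageMetersPerUnit(stage, 1.0)      # real-world units matter for a twin

    UsdGeom.Xform.Define(stage, "/CarTwin")        # placeholder root for the "vehicle"
    chassis = UsdGeom.Cube.Define(stage, "/CarTwin/Chassis")
    chassis.GetSizeAttr().Set(1.0)                 # a 1 m cube standing in for real geometry
    chassis.AddTranslateOp().Set(Gf.Vec3d(0.0, 0.5, 0.0))

    stage.GetRootLayer().Save()                    # writes a .usda file other tools can open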

It's the ability not just to look like the real world but to behave like it that makes Omniverse so powerful, in ways you might not have imagined.

Robotics

One of Nvidia's areas of focus for the future is robotics. Not necessarily the humanoid type right now, but almost certainly with that as an end goal. We've been building robots for decades, and for a long time the primary goal was simply to stop them from being terrible. That took ages, but it's starting to look like we've succeeded. Boston Dynamics' videos of choreographed robots moving in perfect sync with the music are awe-inspiring, and even if they're not exactly what they seem, the sheer mechanical virtuosity of the company's machines is impressive.

The stage beyond "not being terrible" is "getting rather good", which is where we're at today. And Nvidia may have just sped everything up with a technique that involves Omniverse.

The GPU company has been experimenting with self-learning humanoid games characters. These physically modelled humanoids live in Omniverse and have physical attributes that would make sense and be consistent in the real world. They weigh what a humanoid would weigh, and, as they move around, they behave as if their limbs were real. To train them, they were simply given enough time to learn from their mistakes. And they did learn. We saw examples of games characters that could move just like a human being. It didn't look artificial, and it didn't look "robotic". It just looked natural.
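
Nvidia's training runs happen inside Omniverse at enormous scale, but the underlying idea - a policy improving purely by trial and error in a simulator - can be sketched in a few lines. The toy example below uses the open-source Gymnasium toolkit and its simple CartPole environment rather than a humanoid (my substitution, to keep it self-contained), and hill-climbs a linear policy; it illustrates the principle, not Nvidia's actual method.

    # Trial-and-error learning in a simulator, reduced to its simplest form:
    # randomly perturb a linear policy and keep the change if it scores better.
    # Assumes: pip install gymnasium numpy
    import gymnasium as gym
    import numpy as np

    def run_episode(env, weights):
        """Total reward earned by a linear policy over one simulated episode."""
        obs, _ = env.reset()
        total, done = 0.0, False
        while not done:
            action = int(np.dot(weights, obs) > 0)              # two discrete actions
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
        return total

    env = gym.make("CartPole-v1")
    best_w = np.random.uniform(-1, 1, size=4)
    best_score = run_episode(env, best_w)

    for _ in range(200):                                        # trial and error
        candidate = best_w + np.random.normal(scale=0.1, size=4)
        score = run_episode(env, candidate)
        if score > best_score:                                  # keep what works
            best_w, best_score = candidate, score

    print(f"Best episode reward after training: {best_score}")
    np.save("learned_policy.npy", best_w)                       # reused in the next sketch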

Next up was a "thing" that looked a bit like one of Boston Dynamics' robotic "dogs". But with wheels. Nvidia made an Omniverse version with all the physical capabilities of the real-world device. It, too, "learned" to do things via trial and error in its virtual world. Eventually, it ended up with far more capability than you might imagine an awkward dog on wheels had any right to have. It could even stand up on its hind wheels.

Then came a pivotal moment of realisation, when Nvidia explained that the learning accumulated by the virtual Omniverse dog could be transplanted into a real-world, physical robot dog on wheels, which would then have precisely the same level of capability, even if it had only just been switched on for the first time.
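
In practical terms, the "learning" is just data: the trained policy's parameters can be serialised in the simulator and loaded onto the physical machine's controller. The sketch below continues the toy example above; RealRobotInterface is entirely hypothetical and stands in for whatever hardware API a real robot exposes, so this is an illustration of the principle rather than Nvidia's sim-to-real pipeline.

    # Sketch of "sim-to-real" transfer: knowledge learned in simulation is just
    # the policy weights, so it can be copied straight onto real hardware.
    # RealRobotInterface is a hypothetical stand-in for a robot's control API.
    import numpy as np

    class RealRobotInterface:
        """Hypothetical hardware wrapper: reads sensors, sends motor commands."""
        def read_sensors(self) -> np.ndarray:
            return np.zeros(4)                   # placeholder sensor vector

        def send_command(self, action: int) -> None:
            pass                                 # placeholder motor command

    weights = np.load("learned_policy.npy")      # learned entirely inside the simulator

    robot = RealRobotInterface()
    for _ in range(100):                         # control loop on the physical robot
        obs = robot.read_sensors()
        action = int(np.dot(weights, obs) > 0)   # exactly the same policy as in simulation
        robot.send_command(action)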

This, it seems to me, is an exceptionally important demonstration. It shows how robots in the real world can be taught how to behave by a "learning avatar" in a virtual world.

This is a huge insight. It's a new and vast application for the metaverse: as a place to train AI. It's risk-free and doesn't need any physical space. You'll also be able to clone the "learning avatars" almost without limit, so the learning process can run massively in parallel and be dramatically faster.
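
The cloning point is what makes the economics so striking: because the learners are software, you can run as many copies of the simulated world as your hardware allows and gather experience from all of them at once. A hedged sketch of that idea, again using Gymnasium's vectorised environments rather than Omniverse itself:

    # Many cloned "learning avatars": step several identical simulated worlds in
    # lockstep, so experience accumulates N times faster per wall-clock second.
    # Assumes: pip install gymnasium numpy
    import gymnasium as gym
    import numpy as np

    N_CLONES = 8
    envs = gym.vector.SyncVectorEnv(
        [lambda: gym.make("CartPole-v1") for _ in range(N_CLONES)]
    )

    obs, _ = envs.reset()
    for _ in range(100):
        actions = np.random.randint(0, 2, size=N_CLONES)   # placeholder random policy
        obs, rewards, terminated, truncated, _ = envs.step(actions)
    envs.close()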

Everything we've been talking about so far is beginning to look like the foothills of the singularity: the point in our civilisation when technology will take off faster than our ability to understand it.

That's it for now. Try not to have nightmares.