
Is a new type of computer emerging?

Pic: Shutterstock

We may be witnessing the birth of a new and spectacularly powerful kind of computer and it won't come as a surprise to learn that this step-change in computing is related to AI.

We've all become accustomed to the daily news stream about computing technology. You could be forgiven for thinking that computers are changing all the time, and yes, they are. But now, the nature of that change itself is changing. If that sounds like a complete riddle, I'm not surprised. Allow me to explain.

The fact that our pocket telephones are now more powerful than supercomputers from a few decades ago is remarkable. But it's more than that: it's the most staggering, breathtaking advance in technology that there's ever been. Until now, it's been enabled mainly by Moore's Law - which is not a law, but an incredibly prescient observation: as the components in computer chips get smaller, you can pack more of them into a given area, so that the number of transistors on a chip doubles roughly every two years. This means computers get more powerful for the same cost and at the same size.
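It's the compounding that makes this so staggering. Here's a quick back-of-the-envelope sketch in Python, using the classic two-year doubling period (the real cadence has varied over the decades):

```python
# Moore's Law as compound growth: transistor counts doubling
# roughly every two years. A 50-year span covers 25 doublings.
for years in (10, 20, 30, 40, 50):
    doublings = years / 2
    factor = 2 ** doublings
    print(f"After {years} years: ~{factor:,.0f}x the transistors")

# After 10 years: ~32x the transistors
# After 20 years: ~1,024x the transistors
# After 30 years: ~32,768x the transistors
# After 40 years: ~1,048,576x the transistors
# After 50 years: ~33,554,432x the transistors
```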

Of course, that doesn't happen by itself. It takes talent, money and vision to drive these advances forward.

The need for speed

Speed is a beautiful thing. It means you can do more in a given time, and, often, that's all you need from a more powerful computer. When you look at the vast range of apps available for download today, what you don't see is that they're all running on essentially the same architecture, one that dates back to the earliest days of computing. Even GPUs, DSPs and all kinds of specialised chips are built for specific tasks while using the same basic principles set out by John von Neumann and others in 1945. You could say that parallelism is a step away from this, and it does ease the "von Neumann bottleneck" - the fact that a conventional processor steps through its instructions one at a time, shuttling everything through a single channel between processor and memory. But GPUs, for example, are really just lots of more or less conventional compute units all running at the same time.
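To make that concrete, here's a toy von Neumann machine in Python - a minimal sketch with an invented three-instruction set, showing how program and data share one memory and everything flows through a single sequential loop:

```python
# A toy von Neumann machine: program and data live in one shared
# memory, and the processor steps through it one instruction at a
# time. The three-instruction set (LOAD/ADD/HALT) is invented
# purely for illustration.

memory = [
    ("LOAD", 100),   # put the value at address 100 into the accumulator
    ("ADD", 101),    # add the value at address 101 to it
    ("HALT", None),  # stop
] + [None] * 97 + [40, 2]  # addresses 100 and 101 hold the data

pc = 0    # program counter
acc = 0   # accumulator

while True:
    opcode, operand = memory[pc]  # fetch: one trip to memory per step
    pc += 1
    if opcode == "LOAD":          # decode and execute, strictly in order
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "HALT":
        break

print(acc)  # 42: every result squeezes through this one sequential loop
```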

So what's the next big step in computing? It could be quantum computing: the results are mixed right now, and while it may eventually be transformative, there's still a long way to go. Instead, the next giant leap could be Neural Networks.

Nothing new about those, you might say. Neural Net technology has been around for a long-ish time. But what's changed is the state of the art in the way that we can use those basic building blocks.

For some time, I've wondered whether Text-To-Image models - remarkable though they are - have an even more profound significance. Perhaps if we could somehow make a more generalised model that could compute "something to something", we might have the basis for a far more powerful capability.

I'm grateful to Andrej Karpathy (@karpathy), previously Director of AI at Tesla, for a recent Twitter thread in which he said:

"If previous neural nets are special-purpose computers designed for a specific task, GPT is a general-purpose computer, reconfigurable at run-time to run natural language programs. Programs are given in prompts (a kind of inception). GPT runs the program by completing the document."

You can read the rest of the thread here.
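To see what he means in practice, here's a minimal sketch of a prompt acting as a program, written against OpenAI's Python library as it existed in the GPT-3 era (the model name and API shape are illustrative of that period and have since changed):

```python
# A prompt as a program: the "source code" is plain English plus a
# couple of worked examples, and the model "runs" it by completing
# the document. Uses the GPT-3-era openai library (pre-1.0); the
# model name and call signature are from that period.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = """Convert each sentence to the past tense.

Input: She walks to work.
Output: She walked to work.

Input: They eat lunch at noon.
Output: They ate lunch at noon.

Input: He writes a column every week.
Output:"""

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3 model of the era
    prompt=prompt,
    max_tokens=20,
    temperature=0,  # keep the "program run" as repeatable as possible
)
print(response.choices[0].text.strip())  # e.g. "He wrote a column every week."
```

No task-specific code is written here: the instruction and the two worked examples in the prompt are the program, and swapping in a different prompt "reconfigures" the same model for an entirely different task.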

GPT-3 and accelerating change

I should explain that when he refers to "GPT", he's talking about GPT-3, the Large Language Model (LLM) that's spawning many of the incredible AI applications that are bursting onto the scene like a meteor shower.

He's right. What we're starting to see is that very specific tasks, like Text-To-Image, are turning out to have a lot in common with other applications. For example, text doesn't need to be the only input. It could be sound, images, video, or even thought patterns detected through brain-machine interfaces. (But let's not get too carried away...)

Neural Nets are designed to be analogous to the part of our brain called the Neocortex, which is responsible for much of our higher-level reasoning and cognitive ability. What's remarkable about this brain component is that its makeup is surprisingly consistent despite being responsible for a diverse range of functions. Instead of a heterogeneous collection of task-specific cells, it is almost entirely homogeneous. That would suggest that AI models built around Neural Nets should also be capable of diverse functionality.
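That homogeneity is mirrored in today's AI architectures. A transformer, for example, is essentially one generic block repeated over and over, with nothing task-specific inside it. Here's a minimal sketch in Python with PyTorch (the dimensions and depth are arbitrary, chosen purely for illustration):

```python
# One generic, task-agnostic block, stacked repeatedly: an echo of
# the neocortex's homogeneity. All sizes here are arbitrary.
import torch
import torch.nn as nn

class Block(nn.Module):
    """A standard transformer-style block: attention plus feed-forward."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        attended, _ = self.attn(x, x, x)
        x = self.norm1(x + attended)       # residual connection + norm
        return self.norm2(x + self.ff(x))  # same again for the feed-forward

# The whole "cortex" is just the same block, eight identical times.
model = nn.Sequential(*[Block() for _ in range(8)])

tokens = torch.randn(1, 16, 256)  # 16 tokens of whatever modality
print(model(tokens).shape)        # torch.Size([1, 16, 256])
```

Whether a stack like this is trained to caption images or to complete text, the architecture itself barely changes; only the data and the training objective do.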

If that's the case, then it would turn computing upside down. Instead of pages of convoluted, gnarly-looking program code (undoubtedly beautiful in the eyes of its developers), we could merely prompt our AI-based computer with speech, text, thoughts or even a mixture of all our cognitive outputs.

So far, technological revolutions have generated more jobs than they have obliterated. That may be different with AI. But at the very least, there will likely be a new class of development work requiring people skilled at writing prompts in plain language. In future, software companies may need developers with a degree in English Literature, not computer science. (I'm only half joking.)

These are, of course, very early days. But on the AI timeline, we could start to see mature applications by the middle of next year. So don't blink, or you'll miss the biggest leap in computing concepts ever. 

