
With ChatGPT's public preview, AI has just taken a huge step forward

No, we're not at this point just yet. Pic: Shutterstock

With the introduction of OpenAI's dialogue-based chat interface, ChatGPT, the project moves on to models based on GPT-3.5 and takes a notable step forward.

Remember Thursday, 1st December 2022. It was the first day of a new era.

It was a day notable at first for its utter normality. You'd expect damp, foggy weather in a northern town in the UK at this time of year. You'd expect to hear that inflation is high and that we're all going to struggle with our energy bills. And you'd expect to see another AI model being talked about on Twitter. But this time, it felt different.

I follow a lot of AI gurus and commentators (they're not mutually exclusive!) on Twitter, but even so, my stream is pretty diverse. I'd expect about one tweet in ten to be about AI. But today, about one tweet in ten wasn't about AI. Something was going on, and it looked like the change in the pattern of tweets meant something significant.

OpenAI intros ChatGPT

They were all talking about ChatGPT, the OpenAI organisation's new chatbot, just released as a public preview. "Chat" means just that: you ask it questions, you ask it to do stuff, and you literally chat with it. For the first time in my online life, I realised that not even the most breathless clickbait headlines could do this justice. "What happened next will shock you!" doesn't even cover half of it.

Quite simply, ChatGPT has taken an already established AI model to new heights, and we probably haven't reached the peak yet, if, indeed, we ever do. But on the way up, it's already clear that this particular AI mountain is far taller than we expected it to be. Somehow, with only a few words of instruction, you can get this AI program to write convincingly on any subject. Want a description of a mundane household fridge in the style of David Attenborough? No problem: "Deep within the bustling kitchen, there lies a majestic and mysterious appliance: the fridge. This sleek, metal-clad beast is a wonder of modern technology, capable of preserving our food for days, weeks, even months on end." Want the instructions for using a DSLR in the Scots language? That's easy: "Keep practisin an experimentin tae learn more aboot uisin yer DSLR camera tae its full potential. Enjoy!" Ask it to critique some computer code, and it will do so in language that's precise and yet accessible.

We even wrote an entire feature using it. 
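If you'd rather poke at this from code than from the web page, a rough sketch is below. ChatGPT itself didn't expose a public API at the time of the preview, so this assumes the nearest programmatic equivalent: the GPT-3.5-era text-davinci-003 completion model, called through the pre-1.0 openai Python package. The prompt, model choice and settings are purely illustrative.

```python
# A minimal sketch of prompting a GPT-3.5-era model programmatically.
# Assumes the pre-1.0 "openai" Python package and the text-davinci-003
# completion endpoint, since ChatGPT had no public API at the time of
# the preview. Prompt and parameters are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Describe a mundane household fridge in the style of a "
    "David Attenborough nature documentary, in two sentences."
)

response = openai.Completion.create(
    model="text-davinci-003",   # GPT-3.5-family completion model
    prompt=prompt,
    max_tokens=120,             # cap the length of the reply
    temperature=0.7,            # allow some creative variation
)

print(response["choices"][0]["text"].strip())
```

Swap the prompt for any of the examples above and the same handful of lines applies.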

It makes Siri look like Baird's mechanical TV in an age of 4K. And it makes Google's future look questionable, to say the least. After all, why would anyone want a terse set of approximate results, possibly skewed by advertising revenue, when they could have a response that reads like considered research and is easy to digest?

Surely Google, Apple, Facebook, Microsoft and the rest will harness this technology and monetise it? Of course they will. But will they be able to do it fast enough? This thing can learn; it can correct itself. It can improve itself. And it may outpace the profit-driven research of the tech giants.

Self-improvement

What does that mean, specifically? AI models like this are good at what they do because they are trained on massive datasets. As the models get larger (with more "parameters" and able to handle more "tokens"), they become more capable. They also get more effective with better-quality datasets. And we're starting to see them improve by looking back at their own responses: ChatGPT itself was fine-tuned with reinforcement learning from human feedback, where human ratings of the model's answers are fed back into training, and researchers are experimenting with models that critique and refine their own output. It's all staggering.
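To make that last idea concrete, here's a rough sketch of a self-critique loop, in which the model is asked to look back at its own draft and rewrite it. To be clear, this isn't OpenAI's training procedure (that relies on human feedback); it's just an illustration of the general idea, again assuming the same GPT-3.5-era completion endpoint, with the question and loop count chosen purely for illustration.

```python
# A rough sketch of a self-critique loop: the model answers, then is
# asked to critique and rewrite its own answer. An illustration of the
# general idea only, not OpenAI's actual training procedure.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def complete(prompt: str) -> str:
    """One call to a GPT-3.5-era completion model (assumed endpoint)."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=300,
        temperature=0.3,
    )
    return response["choices"][0]["text"].strip()

question = "Explain what a camera's ISO setting does, in one paragraph."
answer = complete(question)

# Ask the model to look back at its own answer and improve it.
for _ in range(2):
    critique_prompt = (
        f"Question: {question}\n\n"
        f"Draft answer: {answer}\n\n"
        "Point out any inaccuracies or unclear phrasing in the draft, "
        "then write an improved answer. Return only the improved answer."
    )
    answer = complete(critique_prompt)

print(answer)
```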

But one thing is more staggering than all the other staggering things. ChatGPT can write code.

As you'd expect, the code it writes could be better, but that's probably just a matter of training. So we are faced with the prospect of a computer program that is edging towards any reasonable definition of true intelligence and that could, in time, improve its own source code. That could lead to a virtuous or a vicious circle, depending on your perspective, and on whether it's your livelihood that's under threat.

Of course, politicians, lawmakers and leaders are largely unaware of this development. They won't be for long, but how well will they understand it and its consequences? And this is just the start. There will be new versions and new applications based around it almost every week.

Ten years ago, you could make a reasonable guess about where technology would be a decade in the future. Five years ago, that horizon shrank to five years. Then it became two years, then one, then six months, then a month, and then a week. So by this time next year, the future of humanity will probably look different in the afternoon from how it did in the morning.

Coping with change

How do we deal with it? First, the good news: we can mitigate the effects it will have on employment while enjoying the benefits. But this is just a feeling I have now, which may be rendered out of date with absolutely zero notice. The way I would suggest we cope with it - in very general terms - is to take a meta-perspective. Step outside of the current situation and ask: how can we still influence what's happening? More specifically: where will it still need human input?

As long as we don't lose - through ignorance or carelessness - control over the AI we have created, we will always be able to influence it, guide it, instruct it and train it. That's the key: how we manage it. I have doubts about our ability to do that, based on all the other things we have effortlessly messed up on this planet. But if we succeed in aligning the aims and outcomes of AI with our own innate kindness, benevolence and altruism, then we could truly be at the dawn of a new golden age.

That might sound idealistic, and it absolutely is. But to be able to navigate, you have to know where you're going. And that's not a bad place to aim for. 

Read on to see what ChatGPT is capable of.

Tags: Technology AI
