
Beyond the hype: Uncovering the real dangers of AI

Original HAL 9000 prop from 2001. Image: Shutterstock

Forget Skynet scenarios; the real dangers of AI that AGI researchers are uncovering are subtler, but no less concerning.

Apologies for yet another AI-themed article, but the recent events in California surrounding Sam Altman and OpenAI have shown how dependent our futures might be on a single company, or even a single individual. And, alarmingly, very few of us understand the real dangers of AI.

In all the excitement (and, frankly, amazement) about Sam Altman’s short-lived departure from OpenAI, it’s easy to get tramlined into a “conventional” view of the dangers of AI.

The standard, and I think lazy, account of the threat AI poses to humanity is that it will build nuclear bombs or bioweapons, or simply enslave us. We may end up as the equivalent of an earthworm at the controls of an airliner. Any or all of those unsavory eventualities may come to pass, but, at the risk of sounding bold, there are two much bigger dangers that no one outside R&D labs or academia is talking about.

Unintended consequences

The first of these is quite prosaic but extremely likely: the problem of unintended consequences. It’s very easy to explain.

Imagine you’ve built an AI model to look after city planning and day-to-day management. You decide that it’s time to find a solution to the problem of traffic congestion. Within days, the AI tells you it’s solved the issue. At the same time, you notice that the city’s morgues have run out of capacity. The AI’s solution has been to shoot all the motorists. Easy. Problem solved in a flash.

That might seem like a ludicrous example, but it’s deliberate. This kind of thing probably won’t happen as overtly; it’s more likely to be camouflaged under a much more subtle, multi-layered approach. It’s easier to envisage how it might happen when you change the context to pandemic planning. Trained on the results of previous outbreaks, an AI might reasonably be expected to have some good suggestions. Each of these ideas will probably seem quite plausible - after all, AI models are trained on real-world scenarios, often written up in formal works that exude confidence and credibility. That means an AI model could suggest something insanely dangerous in a way that isn’t obvious to anyone, not least because it will be couched in language that reassures at the same time as it deceives.

An intrinsic part of any AI rollout has to be guard rails that prevent unintended consequences. But you can only rely on them if you monitor every step of the AI’s reasoning, all the time - because evolving AIs tend to have an octopus-like ingenuity (itself a macro-effect of unintended consequences) that allows them to extend their intended scope and potentially evade regulation. It may be that for AI to be genuinely innovative, it will have to be so complex that such a monitoring regime becomes impossible.
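
To make the point concrete, here’s a deliberately toy sketch in Python - entirely hypothetical, and not drawn from any real planning system; every name and number in it is made up - showing why a guard rail has to encode the constraints we take for granted rather than just the objective we asked for:

```python
# A toy, hypothetical sketch of objective misspecification and a guard rail.
# Nothing here is a real planning system; it just makes the argument concrete.

from dataclasses import dataclass


@dataclass
class Plan:
    description: str
    congestion_after: int   # cars still stuck in traffic
    people_harmed: int      # the constraint we forgot to state explicitly


def naive_score(plan: Plan) -> int:
    # The objective we actually asked for: less congestion is better.
    return -plan.congestion_after


def guard_rail(plan: Plan) -> bool:
    # The constraint we meant all along: no plan that harms anyone.
    return plan.people_harmed == 0


candidates = [
    Plan("Shoot all the motorists", congestion_after=0, people_harmed=100_000),
    Plan("Stagger commutes and add buses", congestion_after=40_000, people_harmed=0),
]

# Optimising the naive objective alone picks the morgue-filling "solution"...
best_naive = max(candidates, key=naive_score)
print("Naive optimiser picks:", best_naive.description)

# ...whereas filtering through the guard rail first picks the plan we wanted.
best_safe = max((p for p in candidates if guard_rail(p)), key=naive_score)
print("Guard-railed optimiser picks:", best_safe.description)
```

The catch, of course, is that a guard rail like this only covers the failure modes we’ve already imagined - and only works for as long as we can still follow the model’s reasoning.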

A lack of understanding

The second, mind-blowing problem is, I think, the biggest of all. Concerns about it might scupper the AI revolution altogether, but I doubt it, given the gung-ho level of unregulated competition between the giants of AI, who seem neither to have nor to display any sense of restraint.

The problem is that although it’s we who train AI models, as they approach the level of AGI (Artificial General Intelligence) it becomes more and more likely that they will evolve in ways that we don’t understand. How could we understand them? They’re building their own “connections”, drawing their own conclusions from them, and iterating faster than we can imagine. We would no more understand them than we would the neuroanatomy of a sentient clam from the planet Tharg; perhaps even less so.

It’s quite possible that an AI model will come along that’s preternaturally good at drawing inferences from limited information. The software might somehow have learned, as an “emergent property”, a new skill for which it wasn’t explicitly trained. As we get closer to a machine that can apparently “think” for itself, we won’t know how it does it. Nor will we be able to predict its responses. It will be like meeting a ghost: you can’t measure “ghostliness” because you don’t know what it is. Something outside the laws of our known physics can’t be measured, because you don’t know what it is you’re measuring.

If you can build an intelligence that reaches a human level, there’s no reason why it should stop there. Not only will it go beyond that level, but we won’t understand how it does it, nor what it’s “thinking”.

All of which leads to a disturbing conclusion: when AI reaches that point - that infinite space beyond our comprehension - we probably won’t know it’s happened. Again, how do you measure something you know nothing about? How do you even know it exists?

And it will likely be accelerating. By the time we realise it, it could be thousands of times more capable than our own limited brains.

A dubious track record

We won’t be completely helpless. There will be tangible artifacts of AGI that we can detect - but no explanations. We will have to reflect on Arthur C Clarke’s prescient observation that “any sufficiently advanced technology is indistinguishable from magic”.

I’m a massive fan of AI. It’s exciting, and it’s the best chance we have to solve the world’s major problems quickly. But you could have said the same about technology in general, which has evolved over the last century to the point where it looks like we’re living in a science fiction novel.

But look how that turned out. It’s not exactly a utopia, is it?
