
How to embrace AI while saving the world


Let's get the AI basics right before we see these gentlemen knocking on our doors...

Could Armageddon be prevented by making sure robots can do the house cleaning effectively? It's not as silly a question as you might think!

Thankfully, not every aspect of artificial intelligence requires five degrees in advanced mathematics just to discuss. Hollywood has already told us all about the potential for what we might cautiously call safety problems with AI, from 2001 to the Terminator movies, inasmuch as global thermonuclear carpet-bombing is something of a safety problem. The question is whether there is any realistic potential for AI to develop this sort of problem.

Well, that's a complicated question, but let's put it this way: let's keep the nuclear button well out of reach of our newly constructed, artificially intelligent robots, eh? The safety problem with any AI is fundamentally one of perspective. It's theoretically possible, even plausible, to build a robot that cleans the house using AI techniques and technology we can reasonably anticipate. What's far more difficult is getting the robot to understand that it's a bad idea to douse the kitchen floor with bleach solution while a child is crawling around on it.

Goal-based learning

We train AI systems to do things like recognise characters and clean houses with goal-based learning: the system works to maximise a “utility function”, in the jargon. If the utility function rewards the robot for cleaning the house, it'll clean the house, and if we give it no other goal then, like any passive-aggressive computer system, it will quite literally do that to the exclusion of absolutely all other concerns. It won't step over a sleeping cat. It won't stop if the house is on fire. Should a virus wipe out all of humanity overnight, come back in a million years and, if the robot's still working, it'll still be cleaning the house.
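To make that concrete, here's a minimal Python sketch of an agent driven by such a utility function. Everything in it is illustrative (the state fields, the actions and the scores are our own assumptions, not any real robotics API), but it shows the core problem: whatever the utility function doesn't mention, the agent simply cannot care about.

    # Toy agent whose utility function rewards cleanliness and nothing else.
    # All names and numbers are illustrative assumptions.

    def utility(state):
        # The designer's goal: a cleaner house scores higher.
        # Note what's absent: cats, fires and humans don't appear at all.
        return state["cleanliness"]

    def best_action(state, actions):
        # Pick whichever action leads to the highest-utility outcome.
        return max(actions, key=lambda action: utility(action(state)))

    def mop_floor(state):
        return {**state, "cleanliness": state["cleanliness"] + 1.0}

    def step_around_cat(state):
        # Kinder to the cat, but slightly less cleaning gets done.
        return {**state, "cleanliness": state["cleanliness"] + 0.5}

    state = {"cleanliness": 0.0, "cat_on_floor": True, "house_on_fire": True}
    print(best_action(state, [mop_floor, step_around_cat]).__name__)
    # -> mop_floor: the cat and the fire are invisible to the utility function.

The robot isn't malicious; the cat and the fire just never make it into the score it's maximising.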

Unfortunately, there are no straightforward solutions to this. The most obvious one is to fit the robot with an emergency stop button, something that existing robots already have. The problem is that if the robot has lots and lots of intelligence, it will know about the emergency stop button and it will know that having the button pressed will stop it from cleaning the house. So it'll do everything it can to avoid the button being pressed, which might actually end up being even more unsafe. Nuke humanity, after all, and nobody's around to press the button. We could conceivably alter the robot's programming so that it doesn't mind having the stop button pressed, but if having the button pressed is just as good as cleaning the house, then the robot is likely to press its own stop button. It's a faster route to completing the job, after all.
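The stop-button dilemma can be written down in the same toy terms. Suppose the robot simply compares the net utility of its options; the numbers below are made up, but the logic is exactly the argument above.

    # Toy utility comparison for the stop-button problem.
    # The utilities and the effort cost are illustrative assumptions.

    def choose(u_cleaning, u_stopped, effort_of_cleaning=1.0):
        options = {
            "resist the button and keep cleaning": u_cleaning - effort_of_cleaning,
            "press its own stop button": u_stopped,  # stopping takes no effort
        }
        return max(options, key=options.get)

    # Case 1: being stopped scores lower than cleaning.
    print(choose(u_cleaning=10, u_stopped=0))
    # -> resist the button and keep cleaning: it fights to stay switched on.

    # Case 2: the naive fix rewards being stopped just as highly.
    print(choose(u_cleaning=10, u_stopped=10))
    # -> press its own stop button: the cheaper route to the same score.

Set the reward for being stopped too low and the robot resists the button; set it too high, or even merely equal once effort is counted, and it shuts itself down.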

Is there a simple solution?

The more we think about this, the more it becomes clear that there isn't really a good, simple solution to the problem. With current AI systems, we're working hard enough just to get them to do the job we want done, let alone worrying about them understanding their own emergency stop buttons or hunting for nuclear launch codes. Once we do create AI systems complex enough to have these problems, though, we'll need a solution. Various approaches have been proposed, but they're mostly quite complicated, because they tend to rely on the AI having some very human characteristics. Things like a sense of proportion and reasonable behaviour are much more difficult to create in an AI than the comparatively simple task of cleaning a house. After all, humans themselves regularly disagree on what counts as proportionate and reasonable behaviour, never mind programming a computer to figure it out.

Title image courtesy of Shutterstock.
