
Are we heading for a new Luddite revolution?


There's something comforting about an old, mechanical typewriter...

If you thought technology was accelerating fast now, wait till you see the nitrous boost that AI is going to give to the world. All we have to do is survive the G-forces.

It has never been harder to predict anything than it is today. Not only is the exponential trajectory of technology making us feel like we’re living in a science fiction novel, but we’re still wide-eyed and somewhat stupefied from the Trump/Brexit/<insert authoritarian populist election winner here> revolution that is seemingly still ongoing.

All of this will seem like small fry compared to what’s coming up in the next few years as Artificial Intelligence (AI) becomes a dominant factor in our lives. 

And it’s no longer a question of whether this will or will not happen. It is happening, and we’re in the foothills of the biggest revolution to hit work and leisure since time began. 

In a way, it’s a privilege to be alive while this is happening. We’re going to see more progress in the next twenty years than in the whole of history, as long as we don’t end up killing each other or destroying the planet, either of which unpleasant outcomes would not entirely surprise me. 

But I remain optimistic that we will at least exist in twenty years time, and that the world might be a better place because of technology - and, perhaps, because of us, too. 

The world is much less stable than it used to be. Many of the old certainties have gone. Where Western politics used to be organised around a simple Right/Left axis, there are now more dimensions than there used to be, and the opinion pollsters - largely oblivious to this - are licking their wounds.

We are used to working within known boundaries. All of this changed in 2008 when automatic trading programs that were effective within strictly defined limits found that those familiar extremities were being exceeded by orders of magnitude. That’s happening all over the place now, both politically and in the field of technology. 

I was in New York City on the night that all the meters hit the end stop and carried on going. Pollsters were looking in the wrong direction and weren’t even measuring the factors that caused the election result. “Working Class” doesn’t mean what it used to mean.

We need to get used to this. Technology was in the background in the big financial crisis. It’s what fuelled some of the issues in the recent election (and “Brexit”). In the next few years, technology will move to the forefront, and we will have seen nothing like it before.

What’s behind this is artificial intelligence. Look at any of the big social media, search and software companies and you will find that they are betting their futures on AI. The hardware industry is heading that way too: Intel has mentioned the possibility that it will put neural networks on its CPUs.

A further accelerant

Let's be clear about this. Technology is accelerating anyway, without the help of AI. The simple fact that today’s generation of tools builds the next generation means that technology is getting better, and if we need more computing power to design the next generation of… computing power, then that’s what’s going to happen.

Meanwhile, quite aside from Moore’s Law, algorithms are getting better. Increased bandwidth means that cloud and distributed processing are becoming useful. All of this already means that technology is changing faster than ever, and what happens when you add AI to the mix is anyone’s guess. 

We’re at a curious and important stage in the development of AI. We’re certainly not on the verge of creating a general intelligence that is as powerful as a human brain. But as recently as last week we have seen evidence that some of the basic technology that you need for general human-level AI has been achieved. Machines are starting to work things out for themselves. They’re beginning to acquire complex skills without being taught. 

The key to this process is the so-called neural net. This is an electronic version of a key functional aspect of the brain. When a neural net is exposed to stimuli, it sets weightings in its internal pathways that correspond to the frequency with which it has been exposed to certain patterns. It can then “recognise” similar patterns. Most importantly, it can “learn” new ones - without being explicitly taught. This ability is now being emulated in digital electronics, and it is absolutely fundamental to “proper” artificial intelligence.
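To make that idea concrete, here is a minimal sketch in plain Python and NumPy - not any particular production framework - of a tiny neural net adjusting its internal weightings through repeated exposure until it “recognises” a pattern (XOR, chosen purely for illustration) that it was never given as an explicit rule:

```python
# A minimal sketch of learning-by-weight-adjustment, as described above.
# Plain NumPy; real systems use far larger nets and dedicated frameworks.
import numpy as np

rng = np.random.default_rng(0)

# Toy stimuli: inputs and the pattern we want recognised (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random starting weightings: one hidden layer of four units.
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Repeated exposure: each pass nudges the weightings in the
# direction that reduces the recognition error (backpropagation).
for _ in range(10_000):
    hidden = sigmoid(X @ W1)           # forward pass
    out = sigmoid(hidden @ W2)
    err = y - out                      # how wrong is the current "recognition"?
    d_out = err * out * (1 - out)      # propagate the error backwards...
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ d_out             # ...and adjust the weightings
    W1 += X.T @ d_hidden

# Should print values close to [0, 1, 1, 0] - the XOR pattern was
# never stated as a rule; the net worked it out from examples.
# (Exact convergence depends on the random starting weights.)
print(np.round(out, 2))
```

Nobody wrote an “if both inputs are the same, answer zero” rule anywhere in that code; the behaviour emerges from the weightings alone - which is exactly why, at scale, it becomes hard to say how the machine does what it does.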

Did someone say Skynet?

But there’s a significant downside to this. At this point, you’d normally expect me to start talking about machines wanting to take over the world. Well, it is yet to be determined whether we even know what we mean by a machine “wanting” to do anything. No, this is something different. 

It’s that when a machine learns something - when it “teaches” itself - we will rarely be in a position to know how it has done it. Which means that machines will start to behave in a non-deterministic way. We won’t be able to predict their actions. This, to me, is far more dangerous than artificial megalomania.

To be able to depend on machines that are self-taught, we’re going to have to do it statistically. This question is already playing out in the field. Self-driving cars will be amongst the earliest, and certainly the most beneficial (or disruptive, if you’re a delivery driver), AI technologies to be widely adopted. If the technology arrives late, it won’t be because of the engineering. It will be because of the law. Think about it. On the one hand, we’re told that these cars learned to drive all by themselves. On the other, we’re expected to trust our lives to them. How’s that going to work?

It will work the only way possible: by proving that self-driving cars are statistically safer than human-driven vehicles. And it’s surprisingly easy to do this. Just set up a vehicle that’s capable of autonomous driving and let me, or any other “normal” driver, drive it over an extended period. Do this with thousands of people. There will be accidents, many caused by human error. Each time there’s an accident, compare what the human driver did at the time with what the AI driver would have done. At the point where the AI driver would have caused fewer accidents, or avoided more, by taking different decisions, you have proof that it’s safer to let the machine take the strain. Wait until the AI avoids ten times the number of accidents, and it becomes irresponsible to let humans anywhere near the driving seat.
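As a back-of-the-envelope illustration of that statistical argument - with entirely hypothetical names and numbers - here is a sketch that compares, across a log of incidents, what the human driver did with what the AI would have done, and computes the kind of safety ratio described above:

```python
# A hedged sketch of the statistical comparison described above.
# All data here is hypothetical; a real trial would involve careful
# counterfactual reconstruction and proper significance testing.
from dataclasses import dataclass

@dataclass
class Incident:
    human_crashed: bool   # did the human's decision lead to an accident?
    ai_would_crash: bool  # would the AI's decision have led to one?

def safety_ratio(incidents: list[Incident]) -> float:
    """Accidents the human caused vs accidents the AI would have caused."""
    human = sum(i.human_crashed for i in incidents)
    ai = sum(i.ai_would_crash for i in incidents)
    return human / max(ai, 1)  # guard against division by zero

# Hypothetical fleet log: thousands of uneventful trips, a handful
# of accidents, and the AI's counterfactual decision for each.
log = ([Incident(True, False)] * 50      # human crashed, AI would not have
       + [Incident(True, True)] * 5      # both would have crashed
       + [Incident(False, True)] * 4     # AI would have crashed, human did not
       + [Incident(False, False)] * 10_000)

ratio = safety_ratio(log)
if ratio >= 10:
    print(f"AI is ~{ratio:.0f}x safer here - arguably time to hand over the wheel")
else:
    print(f"AI avoids {ratio:.1f}x the accidents so far - keep gathering evidence")
```

The point of the ten-times threshold in the sketch is the one made above: a bare statistical edge might not persuade lawmakers or the public, but an order-of-magnitude difference makes keeping humans at the wheel the harder position to defend.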

There is so much more to expect from a world infused with AI. How we respond to it depends on who’s in charge and what legal frameworks exist. It has the potential to be incredibly wonderful, or incredibly disruptive and dangerous. Machines could take care of all the dull, difficult and dangerous stuff while we spend our time looking after the less fortunate and needy; or we could all end up as virtual prisoners and slaves. If you think the world is unstable now, you haven’t seen anything yet.

Alternatively, we could pass laws restricting the development of AI. We might take the view that - like the atom bomb - we’re in danger of creating something that could destroy us. Some people would want to ban it outright. Even great scientific minds warn us to be cautious of it. There could even be a “Luddite revolution”.

My feeling is that we should progress with it cautiously and with all necessary safeguards. We can expect there to be benefits. We actually need there to be if we’re going to see out this century. 

Right now, though, there are no controls, no overseeing authority, and no brakes or damping mechanisms. There’s no way to monitor the state of AI, and the biggest reason for that is either military secrecy or commercial confidentiality.

At some point we may wake up and realise that we’re on a runaway train.

What could possibly go wrong?


Graphic: shutterstock.com

