
Is AI going to get us all killed?


Replay: AI is on its way. It’s a big deal and no-one is quite sure of the consequences. How is it going to affect our lives in the film and video-making business?

Artificial intelligence is here, now. You can go out and buy video and audio products that use it already. Does this mean that a machine wants your job? Or even that your NLE will eventually want to kill you? 

That may sound melodramatic, but one RedShark reader has just said: 

"I do not want machine learning or AI in my editors. Or, frankly, anywhere. I am a human being, and I am not willing to be replaced by a machine. These automation companies are killing jobs. They are undermining talent. They are evil."

So there are obviously some real concerns. 

And that’s understandable. The concepts around AI are hard enough to understand in the first place, never mind the virtual impossibility of predicting what’s going to happen in the future, even for experts.

Before we get down to specifics for the media industries, I just want to say that I’m not sure that a machine will ever be able to “want” to do anything. To do so, a machine would have to be conscious and sentient, at least in some sense. 

We tend to be a little bit careless in our use of language. We say things like “stalagmites want to grow upwards” or “water wants to flow downhill”. You can say this about anything with a built-in tendency, but that’s different to saying “my son wants to be an architect”, or “John wants to marry Felicity”. 

So the idea of a machine “wanting” to kill us, or even “wanting” to take our jobs away is far-fetched - at least until we have machines that are genuinely sentient, and it’s hard even to say what that means. Perhaps it means that they are so human-like that humans can’t tell that they’re not human. At which point, I suppose, anything could happen.

The human factor

What’s far more likely, and far more dangerous in my view, is unintended consequences. And ironically this danger arises primarily because of humans. 

The danger is precisely this: when we give instructions to a machine, we don’t “think” like a machine; we think like humans. Humans have common sense. Maybe machines will have common sense too one day, but until they do, the danger exists. 

Let’s take a simple example: suppose we tell an AI that it has to figure out a way to decrease traffic congestion in a city centre. The result: traffic moves 30% faster. That’s because the computer removed all the pedestrian crossings. The only flaw in this solution is a massive increase in pedestrians being hit by cars. 
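To make that concrete, here's a deliberately toy sketch in Python. Everything in it - the functions, the numbers, the relationships - is invented purely for illustration; the point is only that an optimiser maximises exactly what you tell it to, and nothing else.

```python
# Toy illustration of a misspecified objective. All relationships
# and numbers here are fabricated purely for illustration.

def average_speed(crossings: int) -> float:
    # Invented assumption: fewer crossings means faster traffic.
    return 20.0 + (10 - crossings) * 0.6   # mph

def pedestrian_accidents(crossings: int) -> float:
    # Invented assumption: fewer crossings means more accidents.
    return 100.0 / (crossings + 1)          # per year

# The objective we actually gave the machine: maximise speed.
best = max(range(11), key=average_speed)

print(best)                        # 0 - remove every crossing
print(average_speed(best))         # 26.0 - speed is maximised...
print(pedestrian_accidents(best))  # 100.0 - ...accidents explode,
# because safety never appeared in the objective we specified.
```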

The error in that methodology is thankfully obvious. It would never be allowed to happen. But the more autonomous control you give to an AI system, the bigger the danger that such flaws go undiscovered until it's too late.

You could say that this type of thing is already happening - although it's harder to ascribe such consequences entirely to AI. We've all seen the type of cascading failure that happens when an IT system goes wrong in some obscure corner and brings down a worldwide booking system. This happened recently, and the cumulative effects grounded an entire airline. I don't think that was because of AI, though. It's much more likely to be the result of bad design in the airline's IT system, which meant it wasn't resilient enough to survive the sequence of events.

It's possible that we've already seen this happening in financial markets. Algorithmic trading has been around for a long time, and because computers can take decisions very quickly, it's sometimes hard to stop the consequences of a bad decision until it's too late and the whole market has been upset (causing a run on a particular stock or currency, for example).

So far, most examples of this have not been the result of AI, but this industry sector is notoriously secretive, and regulation is especially bad at keeping up with emerging technologies, so most people have absolutely no idea whether AI is in use in critical situations in the finance industry or not. It is entirely conceivable that a "rogue" AI could bankrupt a company or even a nation.

(I use the term "rogue AI" here for the sake of clarity. I'm aware that it's an emotive term, but let's live with that for now.)

So, machines are not going to want to kill us, and the biggest danger is the unintended consequences of instructions given by humans to machines that lack common sense.

How, then, is AI really going to affect our jobs in the content creation business? 

Extreme cases

Let's start by looking at an extreme case. 

Imagine a world where you write a film script and feed it into a computer, which then outputs a 16K resolution video file representing the finished production - complete with actors, locations, special effects and audio: all glossily finished and completely convincing to an audience. (Imagine the render times!)

I don't think this scenario is impossible. Nor do I think it's necessarily a long way away. But there's no way of telling. There are some developments in AI that I think make this eminently possible, but it will take thousands, millions, billions - or even trillions - of times more computing power than we have available today. Does that mean it's a million years away? No. I'd give it a maximum of 50.

If you think that's unlikely, just look at what we've achieved in the past 50 years. We've gone from slide rules to supercomputers in our phones. There is simply no way to know what's going to happen even in the next 10 years, never mind the next 50. New technology is always the launch pad for newer technology.

The iPhone X is 300 times more powerful (at least) than the first iPhone. At that rate (which is actually likely to increase) the iPhone 20, in merely another ten years, will be 90,000 times more powerful than iPhone number one. In 20 years it will be 27,000,000 (27M) times more powerful. You get the idea. 
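For the sceptical, the compounding is easy to check. A back-of-envelope sketch only - the 300x-per-decade figure is this article's rough estimate, not a measured benchmark:

```python
# Back-of-envelope compounding, assuming a steady 300x per decade
# (this article's rough estimate, not a measured benchmark).
per_decade = 300          # iPhone X vs the original iPhone
print(per_decade ** 2)    # ten more years: 90,000x the original
print(per_decade ** 3)    # twenty more years: 27,000,000x
```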

Put AI into the equation and it will make Moore's Law look like predicting the weather with a piece of seaweed. 

So, given all that uncertainty, the only thing we can do is look at what we have now, and what might happen in the immediate future. 

Machine Learning

Today, we have the first practical uses of AI, most of which have come about through Machine Learning (ML). 

ML is what happens when you show a system called a neural network a big set of examples, and it "learns" the patterns in them. This sounds very abstract, but here's a nice example. Show an ML system all the paintings by Rembrandt, and it may be able to learn the great painter's style. You can then feed it photographs, and it will re-render them in the style of the Dutch oil painter. It sounds like magic, but you can already download iPhone apps that do exactly this.
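To demystify that "learning" a little, here's a minimal, hand-rolled sketch in Python. It is nothing like a production style-transfer network - it's a tiny net taught the XOR pattern from four examples - but the principle those iPhone apps rely on is the same: adjust the network's weights until its outputs match the examples it has been shown.

```python
import numpy as np

# A tiny neural network taught the XOR pattern purely from
# examples - a hand-rolled sketch of "learning patterns",
# not anything like a production network.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                  # forward pass
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    g_out = (out - y) * out * (1 - out)       # output-layer error
    g_h = (g_out @ W2.T) * (1 - h ** 2)       # backpropagated error
    W2 -= 0.5 * (h.T @ g_out)                 # gradient-descent
    b2 -= 0.5 * g_out.sum(axis=0)             # updates for both
    W1 -= 0.5 * (X.T @ g_h)                   # layers
    b1 -= 0.5 * g_h.sum(axis=0)

print(out.round(2).ravel())   # converges towards [0, 1, 1, 0]
```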

Other ML systems have learned enough about the world to be able to create visualizations for films: show the system where you'd like houses, lamp-posts, cars, pedestrian crossings and so on, and it will produce an (admittedly rather blurry) scene to your specification.

At the point when these "scenes" are photo-realistic and high resolution, we will be asking all sorts of questions about machines and creativity; perhaps even long before that. 

Meanwhile, asset management systems are using AI (specifically, again, Machine Learning) to enable searches using objects and descriptions rather than specific metadata. 

Maybe not as sci-fi as other potential AI applications, but immensely useful nevertheless.
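As a crude sketch of how the search half of that works: real media-asset systems use learned image and text embeddings, where the Python below uses simple word statistics and invented, hand-typed descriptions, but the mechanics - turn descriptions into vectors, rank assets by similarity to the query - are essentially the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical asset descriptions. In a real system these would
# come from image-recognition models, not be typed in by hand.
assets = {
    "clip_001.mov": "aerial drone shot of a city centre at dusk",
    "clip_002.mov": "close-up interview with office background",
    "clip_003.mov": "pedestrian crossing, rush-hour traffic, rain",
}

vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform(list(assets.values()))

def search(query):
    """Rank assets by similarity of description to the query."""
    scores = cosine_similarity(vectoriser.transform([query]), matrix)[0]
    return sorted(zip(assets, scores), key=lambda pair: -pair[1])

print(search("busy traffic in the rain"))   # clip_003 ranks first
```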

There's no doubt that AI is coming. There's also no question that we need to be ready for it. For me, the biggest question of all is: who controls it? If AI controls AI then, yes, we're probably all doomed. But if we're forewarned and prepared, this needn't happen.

So, if AI remains in the control of humans, does that mean we're all going to be OK?

It depends which humans.

Title image courtesy of Shutterstock.
