Driverless cars are one thing, but the implications of recent work in advanced artificial intelligence point to a lack of human involvement in other areas too. Is anywhere really safe?
There's a growing sense that artificial intelligence (AI) is going to be important. In fact, if you read the newspapers or frequent the tech sites, you're likely to see at least a couple of AI-related stories every week.
It's important to understand what artificial intelligence is, what it can do and what it can't do. Ultimately, we need to know how it's going to affect our ability to work with film, TV and video.
There are two types of artificial intelligence: Narrow and General. Narrow AI works in a single, specialised field and can make quite credible decisions based on what it's been taught. Outside of that, it's about as clever as a peanut.
General AI is a much bigger idea. It's intelligence that's able to learn and make judgements about any set of circumstances that it might find itself in.
Now, before I go on, it's important to say that there are some very deep and philosophical concepts that need to be grappled with for a realistic understanding of the consequences of full General AI. This isn't the article for those discussions, but just bear in mind that, until these issues are dealt with, we are guilty of begging a lot of questions about, for example, the Self, consciousness, volition and determinism. And that's just for starters, because full General AI that's either at human brain level or (inevitably) beyond it, creates some complex ethical problems that we're barely equipped to even ponder.
Narrow AI is already all around us. It's in our phones and it even answers them when we call the electricity company or our bank.
These systems aren't really intelligent, because all they are doing is recognising words and following a predetermined set of rules for responding to them. But the better they get, the more intelligent they seem. Ultimately, we will have to ask ourselves whether a machine that seems very intelligent really is intelligent. It may well all come down to a matter of definition.
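To see just how unintelligent this kind of system is under the hood, here's a minimal sketch of the recognise-and-respond pattern described above. All of the keywords and canned replies are invented for illustration; a real phone system would work on transcribed speech and far larger rule sets, but the principle is the same: match words, follow a predetermined rule.

```python
# A toy rule-following responder: recognise keywords in the caller's
# (transcribed) utterance and return a predetermined reply.
# Rules and phrases here are hypothetical, purely for illustration.

RULES = [
    ({"balance", "account"}, "Your balance is available after verification."),
    ({"bill", "payment"}, "Connecting you to billing."),
    ({"agent", "human"}, "Please hold for an operator."),
]

def respond(utterance: str) -> str:
    words = set(utterance.lower().split())
    for keywords, reply in RULES:
        if words & keywords:  # any matching keyword triggers the rule
            return reply
    return "Sorry, I didn't understand that."

print(respond("I want to pay my bill"))   # matches the billing rule
print(respond("something unexpected"))    # falls through to the default
```

There is no understanding anywhere in that loop, of course; yet the better the word recognition and the richer the rule set, the more "intelligent" the system appears to a caller.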
We're all used now to Siri, Cortana and Alexa. Sometimes they can be remarkably effective, but some of the newer generation of personal assistants (not released yet) can go much further. For example, you can ask them "is it going to rain on the third day of the longest summer month in the country with the seventh-largest population in the continent south of the USA?" While this is impressive, it is not really vastly intelligent. It's just better at parsing long sentences than we are, which, in itself, does show that the basics of general intelligence are being conquered one by one, but we're certainly not there yet.
I'm pretty sure that some sort of voice-activated personal assistant would be useful for camera operators. For me, it could be a very good alternative to scrabbling my way through a typically labyrinthine camera menu just to change the frame-rate. But as we know, these things are not perfect.
What I have found, though, is that voice assistants can become much more accurate than average if you use them enough to learn their idiosyncrasies. Even with something as simple as asking for the weather, I've found that Siri makes mistakes if I ask her one way and almost always gets it right if I ask her another.
So what about the prospect of cameras that can think for themselves? In a sense, they have been doing that ever since the Canon AE-1 Program SLR, introduced in 1981. Today, with a camera in full automatic mode, it's like having a little expert sitting inside your camera. But as for whatever step lies beyond that, I don't know when it will happen, or if it will happen at all. In fact, I suspect we'll have driverless cars before we have driverless cameras, because the whole point of taking a picture is the artistic intent. And, at least for now, that can only come from one of us.
Graphic by Shutterstock