
How driverless cars will lead to better cameras

Here in my car: will its ability to avoid killing pedestrians lead to better camera tech?
4 minute read

Replay: Before very long, and certainly by the year 2025, cars will essentially be computers on wheels, and the self-driving cars that will dominate the roads will have a lot in common with the cameras of the future.

The automobile of the future will be simpler because it will be electric; battery-powered cars have far fewer moving parts. But it will be vastly more complicated as well, as cars become loaded with sensors and computers dedicated to the task of autonomous driving.

I’ve always felt that so-called car computers have been rather primitive: calculating an average MPG doesn’t need a supercomputer. Even engine management systems aren’t really all that powerful in absolute terms. They don’t need to be: they have one job and they do it well enough. That’s all they need to do. 

But autonomous driving calls for orders of magnitude more computing power. This is not surprising when you consider the complexity of the task, and its critical nature. 

I certainly wouldn’t want to trust my life to the autocorrect on my phone. So why would it be okay to put your life — and your family’s — in the hands of a self-piloting car?

Quite obviously the computing environment in an autonomous car will have been designed from the ground up to put safety first. Man-decades will have been spent inventing and testing safety systems. Ultimately it will be statistics that convince you. I think it was Elon Musk of Tesla who said that as soon as “autopilot” is proven to be ten times safer than a human driver, it would be released as a system that doesn’t need a human overseer. In fact, at that point — or perhaps before it — you could argue that it’s safer to let the car do the driving. I’m pretty sure that “manual” drivers are going to have to pay a higher insurance premium.

It’s not hard to see why a fully autonomous car is going to need a lot more computing power than traditional vehicles. But the reality is that they’re going to need even more “compute” (as it’s fashionably called these days) than you might ever have imagined. 

Because driverless cars will have an incredibly heavy responsibility to bear.

They will be accountable for the safety of their passengers, occupants of other vehicles on the road, cyclists and pedestrians too. The only way they will be able to do this is if they — in at least some sense — “understand” the world. 

This is where it gets really interesting. Understanding the world is not something cars have been traditionally good at. *

In order to know about their surroundings, cars need sensors. And autonomous cars will be festooned with them: radar, lidar, ultrasound and, of course, cameras. When you collate all this data coming in, what you have is, well, just data coming in. There has to be some element of the whole set-up that is able to assemble a world-view out of all of this.
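
To make that a little more concrete, here is a minimal sketch in Python of what collating those sensor feeds might look like in its most basic form: detections from different sensors that land close to each other are merged into one fused estimate of an object. Everything here, the data structures, the one-metre grouping radius and the sample readings, is an illustrative assumption rather than any manufacturer's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str   # "radar", "lidar", "ultrasound" or "camera"
    x: float      # distance ahead of the car, metres
    y: float      # lateral offset, metres

def fuse(detections, radius=1.0):
    """Group detections that fall within `radius` metres of each other
    and average them into a single fused object estimate."""
    fused = []
    for d in detections:
        for obj in fused:
            if abs(obj["x"] - d.x) < radius and abs(obj["y"] - d.y) < radius:
                n = obj["n"]
                obj["x"] = (obj["x"] * n + d.x) / (n + 1)   # running average
                obj["y"] = (obj["y"] * n + d.y) / (n + 1)
                obj["sensors"].add(d.sensor)
                obj["n"] += 1
                break
        else:
            fused.append({"x": d.x, "y": d.y, "sensors": {d.sensor}, "n": 1})
    return fused

readings = [
    Detection("radar", 12.1, 0.4),
    Detection("lidar", 12.0, 0.5),
    Detection("camera", 11.9, 0.6),
    Detection("ultrasound", 1.2, -0.8),   # something close to the bumper
]
for obj in fuse(readings):
    print(f"object at ({obj['x']:.1f}, {obj['y']:.1f}) m, seen by {sorted(obj['sensors'])}")
```

Running this merges the radar, lidar and camera hits into one object about twelve metres ahead, with the ultrasound return kept as a separate nearby object. That, in miniature, is the "world-view" step: many feeds in, one list of things out.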

At this stage we don’t know exactly what form this uber-sensory processor will take (it’s a little bit like the effort to pin down the essence and origin of consciousness - one of the hardest problems for science today) but at the very least there will have to be some powerful computing and the ability to, in some sense, “recognise” objects. In its simplest form, this might be the ability to distinguish between static and moving objects. Beyond that I would guess it would be nice if it could tell the difference between a living object and an inert one. This is not just because it would matter more if the car collided with a living thing, but also because once you know you have a conscious, sentient object in the headlights, you can start thinking about what the intentions of that blobby mass of matter might be.
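
Here is a toy version of that simplest form of recognition: labelling tracked objects as static or moving just by comparing their positions across two frames. The 0.5 m/s threshold and the sample positions are assumptions made up for the example.

```python
def classify_motion(prev_positions, curr_positions, dt=0.1, speed_threshold=0.5):
    """Return a dict of object id -> 'moving', 'static' or 'unknown'.

    prev_positions / curr_positions: {object_id: (x, y)} in metres,
    dt: time between frames in seconds,
    speed_threshold: metres per second below which an object counts as static.
    """
    labels = {}
    for obj_id, (x1, y1) in curr_positions.items():
        if obj_id not in prev_positions:
            labels[obj_id] = "unknown"   # no history for this object yet
            continue
        x0, y0 = prev_positions[obj_id]
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        labels[obj_id] = "moving" if speed > speed_threshold else "static"
    return labels

prev = {"lamp_post": (8.0, 2.0), "pedestrian": (15.0, -1.0)}
curr = {"lamp_post": (8.0, 2.0), "pedestrian": (14.9, -0.9)}
print(classify_motion(prev, curr))   # lamp_post: static, pedestrian: moving
```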

Intentionality is not easy to predict correctly in humans or animals. We find it hard enough between ourselves, and people on the autistic spectrum can find reading others’ intentions particularly difficult.

But while it’s probably asking too much at this stage for an approaching car to read a pedestrian’s facial expressions or micro-body language, it should be relatively easy to interpret macro-behaviour. For example, if someone’s walking towards a pedestrian crossing, then it’s likely that they’re going to expect your car to stop, depending on distances, speed etc. 
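
As a rough sketch of that kind of macro-behaviour rule, the snippet below decides whether to plan a stop based on nothing more than the car's speed, its distance to the crossing, and whether the tracked pedestrian appears to be heading for it. The comfortable-braking figure is an illustrative assumption, not a real vehicle specification.

```python
def should_yield(car_speed_ms, distance_to_crossing_m,
                 pedestrian_heading_to_crossing, comfortable_braking_ms2=2.5):
    """Decide whether to plan a stop for a pedestrian crossing.

    car_speed_ms: car speed in metres per second,
    distance_to_crossing_m: distance from the car to the crossing in metres,
    pedestrian_heading_to_crossing: True if the tracked pedestrian's path
        intersects the crossing (from the behaviour classification above).
    """
    if not pedestrian_heading_to_crossing:
        return False
    # Distance needed to stop at a comfortable deceleration: v^2 / (2a)
    stopping_distance = car_speed_ms ** 2 / (2 * comfortable_braking_ms2)
    return stopping_distance <= distance_to_crossing_m

# 30 mph is roughly 13.4 m/s; crossing 50 m ahead, pedestrian walking towards it.
print(should_yield(13.4, 50.0, True))   # True: plenty of room to stop smoothly
```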

Beyond this, I think we can anticipate cars “knowing” more and more about the real world. They will be able to categorise an increasingly diverse range of objects and behaviours. 

Eventually, they could become as aware as we are of the things going on around them that matter.

At the very least, this will result in a database of objects and behaviours, stored locally, in the cloud, or some mix of both. And because this data can be shared, it will grow incredibly fast. Eventually there will be as much data about the world as there is in the world, or more. (An object may contain a certain amount of data, but it won’t necessarily contain data about its relationships to other objects.)
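
The parenthetical point is easy to see with a back-of-the-envelope calculation: the number of possible pairwise relationships between objects grows roughly with the square of the number of objects, so a database that records relationships soon dwarfs a simple catalogue of the objects themselves.

```python
def pairwise_relations(n_objects):
    """Number of unordered pairs among n objects: n * (n - 1) / 2."""
    return n_objects * (n_objects - 1) // 2

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9} objects -> {pairwise_relations(n):>15} possible pairwise relations")
```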

Well before this point we will be able to use this data for other purposes. In cameras, for example.

We’re already starting to see cameras (like the new Panasonic GH5 and probably many more - this isn’t the sort of information that manufacturers readily talk about) where noise performance and sharpness are enhanced by an ability to recognise properties of objects in the image. For example, a blue sky is not typically peppered with random Gaussian noise. Nor is the edge of a girder normally soft and fuzzy. It’s easy enough to correct for these aberrations when we know they’re not supposed to be there.
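
A toy illustration of the principle, and emphatically not Panasonic's actual processing: if a region of the image is known to be flat, like a patch of sky, its noise can be smoothed away aggressively, while a region known to contain a hard edge is left alone. The region labels are assumed to come from some recogniser upstream.

```python
def denoise_region(pixels, region_label):
    """Return a processed copy of a 1-D strip of pixel values.

    pixels: list of brightness values (0-255),
    region_label: "flat" for areas like sky; anything else is left untouched.
    """
    if region_label != "flat":
        return list(pixels)            # preserve detail: don't soften edges
    mean = sum(pixels) / len(pixels)   # in a flat area, deviation is just noise
    return [round(mean)] * len(pixels)

sky_strip = [200, 203, 198, 201, 199, 202]   # blue sky with sensor noise
girder_strip = [40, 41, 40, 230, 231, 229]   # hard edge between dark and bright
print(denoise_region(sky_strip, "flat"))     # -> uniform, noise-free sky
print(denoise_region(girder_strip, "edge"))  # -> edge kept crisp
```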

The better we get at building machines that understand objects in the world, and the world itself, the better our cameras will become. And very possibly in ways that we can’t imagine yet.

*There is also a really interesting extrapolation of the trolley problem that fits in here. This is an ethical thought-experiment that asks whether you would pull a lever to change the points (switches in the US) and prevent a runaway trolley car from hitting a large group of people, in the certain knowledge that it would divert onto a siding and kill one person. Effectively you have become a life-saver and a murderer at the same time. As the authors of the fascinating paper “The social dilemma of autonomous vehicles”, published in Science in 2016, point out, autonomous vehicles should reduce traffic accidents, but they will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passenger to save the pedestrians. This could keep lawyers busy for decades. Unsurprisingly, they also found that while people are all in favour of this feature in AVs, it doesn’t mean they want to ride in one themselves. - AS

Tags: Technology
