Do you remember when you first found "sharpening" in your photo editing tools? I do. Along with "saturation" it enabled me to make all my photos snap and pop in a way that was unreasonably satisfying then. Looking at them now they have the same hallucinogenic presence to them that we would go to great lengths to avoid today.
But nevertheless, sharpening has a place, used moderately. It's a simple process, analogous to a high frequency boost in the audio domain.
And, just as with audio, when you add gain to the treble, you also boost stuff you don't want to boost. Background noise, sibilance, high frequency buzzing.
But there's no way round this. It's a simple, essentially dumb process.
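To make the analogy concrete, here's a minimal sketch of sharpening as an "unsharp mask" - original plus a boosted copy of what a blur removes - applied to a 1-D signal for clarity (the 2-D image case works the same way, row by row and column by column). The function name and parameters are illustrative, not from any particular editor:

```python
import numpy as np

def unsharp_mask(signal, radius=2, amount=1.0):
    """Sharpen by boosting what a blur removes:
    sharpened = original + amount * (original - blurred)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)  # simple box blur
    blurred = np.convolve(signal, kernel, mode="same")
    return signal + amount * (signal - blurred)

# A soft edge: sharpening steepens it, but overshoots on either side
# (the familiar "halo"), and any noise riding on the signal gets
# boosted right along with the edge - the dumbness of the process.
edge = np.concatenate([np.zeros(10), np.linspace(0, 1, 5), np.ones(10)])
sharpened = unsharp_mask(edge, radius=2, amount=1.5)
print(sharpened.round(2))
```

The overshoot above 1 and below 0 at the edge is exactly the treble-boost side effect described above: the filter cannot tell detail from noise.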
With Photoshop, you can weave all manner of deceptions if you want to (as well as create and enhance wonderful images, honestly and openly).
I wouldn't be averse to a bit of help from Photoshop if I had to sit for a portrait.
Nor would I question the integrity of a photographer who had graded his or her raw footage and added a bit of sharpness and saturation for "punch".
But what would I think about a news photographer removing objects - buildings, trees, groups of children - from a picture to make it look more menacing? I'd think the same about them as I would about anyone acting deceptively.
This has been a problem for some time for news organisations. With so much competition to get your shots chosen, there's every motive for bending the rules slightly if it gets your work noticed.
No one knows how many times we've been deceived. I've never met a photographer who practised this type of deception; I think most are honest in their work.
But there will always be rogue practitioners. Up to now, they've only been caught when the original shot comes to light and is compared with their "enhanced" version. Once that happens, there's no going back.
All of this is about to get a lot more serious - because of AI.
Now, I'm a fan of AI. I solidly believe it's going to make our lives better. But we have to use it responsibly and make sure others do too.
We're at a tipping point in photography and videomaking. It's happened virtually overnight, and this is just the start of it. Like all new technologies, it will bring good things. It's already brought bad things.
It's the ability to replace existing, real faces with other people's, convincingly, and with nothing more than a desktop computer.
This ability is here, now. You can Google it, find a link to YouTube tutorial videos, and you're off - making fake scenes as good as the best of Hollywood, at a minuscule fraction of the cost, and in a fraction of the time.
This changes everything. And as you'd probably already guessed, what's behind all this is AI. Artificial Intelligence. Not the type that's going to build a robotic army and make us all slaves to the machine-god, but in its own way, just as powerful, unstoppable, and even menacing.
Let's just be clear about this. For years, it's been possible to replace heads and faces with other actors. But until now it's been a laborious process that needed experts and expensive equipment. To do it really well, it still needs that. But now, to do it almost as well, it just takes a PC that's good for gaming, and the ability to follow a YouTube tutorial.
Five years ago, we saw this ad featuring the late Audrey Hepburn. Courtesy of Framestore, it's a beautiful piece, and even today it's so good that it's still being shown on network TV.
A few weeks ago, we saw this: made by a Reddit user called Derpfakes.
All of this sprang out of the nasty practice of taking a public (or not so public) personality's face and transplanting it onto available adult footage using software that makes use of machine learning. It's a nasty practice because it has the potential to be used for bullying and blackmail.
Derpfakes' example is a perfectly innocent technology demonstration.
Leaving aside the nefarious potential of this technology, it's easy to see how it could be transformative and subversive. If a hobbyist can make a Princess Leia look virtually as good as the best that Hollywood could manage, what could Hollywood have done with this tool?
And then, what could anyone with ulterior motives have done?
Authenticity for a different world
Putting aside - but not forgetting - this "doom and gloom" scenario, I can't help finding these new techniques exciting, primarily because they're an up-to-the-minute example of technology evolving at lightning speed.
That excitement is tempered by the stark reality that, because of all this, we now live in a different world: one where we can't treat moving images as evidence. We will have to invoke new layers of skepticism if we're going to survive.
That may sound rather dramatic, so to justify what I'm saying here, let's have a closer look at the implications.
First of all, I think we're going to have to have a new way to measure the properties of a photo or a moving image. This new parameter will be Authenticity.
This is going to be important not just in the case of malicious or deliberate attempts to deceive: it's likely that AI and ML will play an increasing part in image capture and reproduction, not just in exotic circumstances, but every day, in every camera.
If that sounds surprising, the reason this is going to be so widespread is that it makes it much easier to take a great photo.
As an artist, you might feel that you don't want the camera to help you to take good photographs, and that's understandable. You certainly don't want a machine to deflect your own artistic intent.
But which camera operator has never used auto focus, automatic exposure, and all manner of picture modes that pop up from time to time?
You have to go back to mechanical (i.e. non-electronic) film cameras to avoid any sort of automation. Most people feel that we've moved on since then.
So, yes, there have been elements of automation in cameracraft for a long time.
And then there’s the pleasure and - if you accept it as such - the creative value of using new techniques - be they purely AI or not - for artistic reasons.
Here are two photos I've recently taken of people I know, with my iPhone. The setting I used is "Portrait; Monochrome". I have an iPhone 8, which has two cameras, and these feed enough resolution and depth data to allow an algorithm (which Apple tells us uses AI) to create a studio-quality portrait based, we assume, on the chip in some sense "knowing" what a good studio portrait should look like.
We can see this kind of thing at work in this clip from an Nvidia presentation from last year.
Ray tracing is extremely computationally intensive. It needs a lot of computing power. So much so that it often takes ages for images to render. Long before the render is finished, we can normally see enough to guess at what the objects being rendered are - and often in some detail.
Now an AI can do this too. In this clip, you can see the ray-traced render get to a certain point. Then the AI says "I know what this is - it's a reflection of a tree on some shiny paint". And then, instead of waiting for the render to complete, the AI finishes the scene to a high quality based on what it has concluded the image is going to be.
Think about this.
It means that as long as a scene can look like something recognisable (to a computer), then no more detail is needed. The computer can finish it off.
This is something that comes close to being miraculous. We're brought up to think that you can't add real quality to an image without additional data, and there's no additional data (about that specific image) involved here.
But there certainly has been additional data that's been used to "teach" the AI what looks right and what doesn't.
As a technique I see absolutely nothing wrong with this. It will speed things up, and may even lead to new artistic techniques.
But you also have to ask: "what happens when it goes wrong?", and "what is the likelihood that someone will misuse this?"
This will happen. Sometimes it will be for criminal reasons. Sometimes it will be used as a way to sell photographs to newspapers.
In some ways this is no different to other photographic enhancement techniques. But in others, it's very different. Think about it like this.
In conventional photography, parts of the picture that aren't perfect reproductions are called artifacts. These can range from being out of focus to noise and often the undesirable effects of compression.
There will be artifacts with AI-moderated pictures too. But instead of noise or a blocky compression artifact, you might get a teapot where someone's head should be.
What's the answer?
It's too early to say, except that we definitely do need to keep our eye on this as it develops.
I'm no expert on this (is anyone?), but I would suggest a new file format - one that content buyers could insist on - that includes the original image in addition to the enhanced one. You could even insist that it include a mathematical hash of the differences between the two.
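As a rough illustration of the idea - with raw bytes standing in for actual image data, and all names hypothetical - such a container might bundle both versions together with a hash of their byte-level differences, so a buyer could verify that the pair really corresponds:

```python
import hashlib

def bundle(original: bytes, enhanced: bytes) -> dict:
    """Pack both versions plus a SHA-256 hash of their byte-level
    differences, so the enhanced image can be checked against the original."""
    diff = bytes(a ^ b for a, b in zip(original, enhanced))
    return {
        "original": original.hex(),
        "enhanced": enhanced.hex(),
        "diff_sha256": hashlib.sha256(diff).hexdigest(),
    }

def verify(pkg: dict) -> bool:
    """Recompute the difference hash and compare it to the stored one."""
    orig = bytes.fromhex(pkg["original"])
    enh = bytes.fromhex(pkg["enhanced"])
    diff = bytes(a ^ b for a, b in zip(orig, enh))
    return hashlib.sha256(diff).hexdigest() == pkg["diff_sha256"]

pkg = bundle(b"\x10\x20\x30", b"\x10\x25\x30")
print(verify(pkg))                      # True
pkg["enhanced"] = b"\x99\x99\x99".hex() # tamper with the enhanced image
print(verify(pkg))                      # False
```

A real scheme would of course need cryptographic signing by the camera or the photographer, not just a hash, to stop anyone regenerating the hash after tampering - but it shows the shape of what buyers could demand.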
I'm sure there would be plenty of ways round this but we have to start somewhere. I'm the last person to want to discourage good scientific progress. But we can't risk losing our sense of reality.