
Time to cloak your work to stop AI ripping it off?

Fighting robots. Image: Shutterstock

AI was always going to be used against AI to help defeat, well, AI, and the University of Chicago’s Glaze is just an opening salvo in a war that could be fought at breakneck speed. Phil Rhodes charts some future history.

A lot of sci-fi movies have depicted combat between artificial intelligences, but the word “war” seems a little overwrought considering there aren’t actually bullets flying yet. Still, recent legal action has already seen the Midjourney and Stable Diffusion image generators, as well as the Copilot code generation tool, accused of infringing copyright on the grounds that their output is based on the work of other people – which it inevitably is, in at least some sense. The result is a situation in which there’s understandably some combative language being thrown around by artists whose work is, at the very least, being observed by computers.

If actions like that define the opening salvo, what we’re about to discuss is the return of fire: Glaze is an image processing system put together by people at the University of Chicago. It’s intended to make trivial, ideally unobjectionable changes to art that frustrate AI attempts to learn from it, or to duplicate its style. Since style per se is not something we can protect with copyright in most jurisdictions, the legal situation here is fuzzy; the moral situation is far fuzzier. Still, the idea of foxing an AI out of learning how to duplicate one’s work is something that might well appeal to people who make a living with a paintbrush or pencil, regardless of the legalities.

The anti-AI cloak

Going by the team’s own demos, Glaze works quite well. The team calls its technique cloaking, inviting unflattering comparisons to Star Trek bad guys. The trick is quite subtle; the idea is apparently to make certain fundamentals of the image look like someone else’s work, so that asking for an image in the style of artist A actually gets us an image in the style of artist B. If that prompts questions about whether using Glaze to prevent an AI learning from images relies on a tool which is itself an AI which learned from images, well… you get the picture.
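For the technically curious, the broad shape of that trick looks something like the sketch below: nudge the pixels, within a budget small enough to be near-invisible, until a feature extractor reads the image’s “style” as someone else’s. To be clear, this is a minimal illustration of the general idea, not the Glaze team’s actual method – it borrows VGG16 features and the Gram matrix from classic neural style transfer as a stand-in for “style”, and the filenames and parameters are made up.

```python
# Illustrative sketch ONLY: pushes an image's Gram-matrix "style" toward
# a target style while an L-infinity pixel budget keeps the change nearly
# invisible. This mirrors the broad idea behind style cloaking; it is NOT
# the Glaze implementation.
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Early layers of a pretrained VGG16 as a stand-in style encoder.
# (ImageNet normalisation is omitted for brevity.)
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16]
vgg = vgg.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def load(path):
    img = Image.open(path).convert("RGB").resize((224, 224))
    return TF.to_tensor(img).unsqueeze(0).to(device)

def gram(feats):
    # The Gram matrix of feature maps is a classic proxy for "style".
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def cloak(image, target_style, budget=4 / 255, steps=200, lr=0.01):
    """Return image + delta, where delta nudges the perceived style
    toward target_style but stays within an imperceptible budget."""
    target = gram(vgg(target_style)).detach()
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(gram(vgg(image + delta)), target)
        loss.backward()
        opt.step()
        with torch.no_grad():  # keep the perturbation within budget
            delta.clamp_(-budget, budget)
    return (image + delta).clamp(0, 1).detach()

# Hypothetical usage: cloak my_art.png toward another artist's style.
# cloaked = cloak(load("my_art.png"), load("other_style.png"))
```

A real system would measure “invisibility” with a perceptual metric rather than a raw pixel clamp, but the tug-of-war is the same: change the image as little as a human can notice and as much as a machine can.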

Looking at the demos, what we actually seem to end up with is one of those slightly eldritch images which spooky children draw in horror movies in order to emphasise their spookiness to the audience. It’s not the first time an AI has produced something that makes any sane observer lunge for the delete key. Still, trying to train an AI on Glaze-cloaked images does risk creating some really monstrous hybrids along the lines of Constable-does-Picasso by way of Dalí’s melting clocks. That isn’t great if you apply it to a text prompt such as “cat eating a fish”; don’t blame Glaze if what you end up with is an R-rated image of a fish eating a cat. Having such things appear behind our eyelids at 3am is the price of attempting to subvert a legitimate security measure, it seems.

There are two obvious questions. The first is whether the cloaking process makes objectionable changes to the image. In practice, it does make visible changes; whether those changes are sufficient to annoy an artist is really up to that artist. The second is whether it actually prevents an AI learning from it, and as we’ve seen, it does – for now. The problem with AI is that, by design, it learns, and there’s nothing a crafty software engineer relishes more than the opportunity to score some sort of victory over a fellow code-basher.

Measure versus countermeasure

In the end, this is likely to be another battle of measure versus countermeasure, much as it has been for more mundane encryption and protection mechanisms for decades. It’s hard not to look back at the scrambling systems used on things like DVD and Blu-ray, or the protection applied to games or productivity software, and how successful they were at preventing people copying data. Or, more to the point, how successful they generally weren’t. It’s not necessarily obvious that the involvement of AI really changes the fundamental considerations in that sense.

Some of the claims recently made seem suspicious. It’s a stretch (a huge stretch) to consider what an image-generating AI does to be “collage,” as has been suggested, and no matter how much people use the word “copy” in their writing, claims that a trained AI contains copies of images are at the very least debatable. Even so, the whole situation does raise some difficult moral questions about how much of the essence of an artist’s work is really contained in an AI which has been trained on that work, and what rights people have, or should have, over that essence. 

After all, humans gain inspiration from each other all the time. Big names in music have often found themselves targeted with claims they’ve taken just a little bit too much inspiration from someone else, although the veracity of those claims often seems to vary with the potential financial rewards of the victor. There were wars over copyright long before AI, so what’s been happening recently is no surprise at all. However, in the eyes of many, the difficulty is that AI works on a rather different scale, given how fast an AI can daub a virtual canvas with virtual paint in the style of more or less anyone.

The problem is one common to any encryption scheme: ultimately, it must be possible to use the content. In the case of a DVD, the descrambling is done by an algorithm in the player; in the case of Glaze, it’s something the human brain is expected to be able to do while an AI can’t – or at least can’t in very specific situations. That’s a sufficiently complicated situation, involving such a new technology, that any attempt to come up with a pithy conclusion risks being made to look very silly in, oh, about 2027. For anyone brave enough to have a go, there’s an area reserved below this article for remarks. For everyone else, it’ll be fun finding out.
