
The scary new frontier of 'making money on YouTube' schemes

Automatically generating images. Image: Shutterstock

Automatically generated content on YouTube might be the worst thing that could happen to everyone who uses it.

Isn't YouTube amazing? You can find personal tuition about anything from keeping a parrot to quantum physics. If you find the right "guru", you can learn so much for a small investment in your time. One of my favourites is Rick Beato, a consummate musician and equally great explainer. He's taught me more about practical music theory than anyone in my life. There's someone like him for almost any subject.

So, now, consider this:

"Start your own money-making business on YouTube for zero effort!". 

That sounds like a scam, doesn't it? You don't need a very sophisticated radar to know that.

I saw it on Instagram, accompanied by a short video showing how to do it. And the thing is - it's not a scam.

Here's how it works

First, get an AI Large Language Model (LLM) to write a ten-minute video script on a subject of your choice. Let's say gardening. You literally just have to ask it. This time last year, that would have seemed impossible. Now, it's routine.

Then, feed your script into any of the services that use AI to generate a voiceover.

Then, feed that near-perfect-sounding audio into one of the services that use AI to generate an almost totally convincing, lip-syncing avatar. Then, ask it to give you an alpha channel for the background.

Go to any of the AI generative diffusion models and ask it to produce a background of a large greenhouse lit by a golden evening sun.

Composite the avatar and the background, and there's your video ready to upload to YouTube. Tomorrow, you can do precisely the same thing with the AI choosing a different gardening topic and writing another script.
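To make the mechanics concrete, here's a minimal sketch of what such a pipeline might look like in Python. It's an illustration, not a recommendation: it assumes the OpenAI API for the script, voiceover and background image, ffmpeg for the composite, and it treats the avatar stage as a pre-rendered file, since those services are third-party and vary. The model names, prompt, topic and file names are all placeholders.

```python
# A rough sketch of the pipeline described above, assuming the OpenAI Python SDK
# (for the script, voiceover and background image) and ffmpeg for compositing.
# The lip-syncing avatar step is a placeholder: those services are third-party,
# so here we simply assume a pre-rendered "avatar.mov" with an alpha channel.
import subprocess
import urllib.request

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

topic = "pruning roses for beginners"  # tomorrow, swap in a different topic

# 1. Ask an LLM to write the ten-minute script.
script = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"Write a ten-minute YouTube video script about {topic}.",
    }],
).choices[0].message.content

# 2. Feed the script to a text-to-speech model for the voiceover.
#    (A real ten-minute script exceeds the TTS input limit and needs chunking;
#    it's truncated here to keep the sketch short.)
voiceover = client.audio.speech.create(
    model="tts-1", voice="alloy", input=script[:4000]
)
voiceover.stream_to_file("voiceover.mp3")

# 3. An avatar service would take "voiceover.mp3" and return a lip-synced
#    presenter with an alpha channel; assumed here to exist as avatar.mov.

# 4. Ask a diffusion model for the greenhouse background.
image_url = client.images.generate(
    model="dall-e-3",
    prompt="A large greenhouse lit by a golden evening sun",
    size="1792x1024",
).data[0].url
urllib.request.urlretrieve(image_url, "background.png")

# 5. Composite the avatar over the background and attach the voiceover.
subprocess.run([
    "ffmpeg", "-y",
    "-loop", "1", "-i", "background.png",   # still image looped as video
    "-i", "avatar.mov",                      # avatar with alpha channel
    "-i", "voiceover.mp3",
    "-filter_complex", "[0:v][1:v]overlay=shortest=1[v]",
    "-map", "[v]", "-map", "2:a", "-shortest",
    "ready_to_upload.mp4",
], check=True)
```

Run that once a day with a new topic and you have an endless supply of videos that nobody ever checked.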

Part of me wants to applaud this because it's a fantastic demonstration of technology accessible to anyone right now. I love the idea that we've come so far. But the more I think about it, the more downsides I can see. Some of them are downright dangerous.

Part of the problem is our willingness to defer to authority. Particularly in the UK, we're prone to accept what we're told if it comes from someone with an upper-class accent. We're now in a position to quantify this. Just make two versions with the same script, one with an avatar equipped with a posh-sounding accent and the other with a regional brogue, and compare how viewers respond.

Have you ever noticed how many adverts for make-up, washing-up liquid or vitamins feature scientists in white coats, often with a complicated lab set-up in the background? If the topic is chemistry, who are you more likely to believe - the "scientist" or some guy in a T-shirt and jeans? Never mind that the "scientist" is, in reality, just an actor, usually in a T-shirt and jeans.

Three concerns for starters

Plenty of things worry me about this.

First, the idea that you can set up an automated process to generate convincing-looking content about serious subjects, but without editorial control, is a slippery slope: to mediocrity at best, and to dangerous misinformation, which is not just the worst outcome but a likely one.

Second, it's possible that as AI improves, it will self-correct to minimise factual errors. But what if part of the automated process includes a feedback loop, where popular (i.e. "liked") posts lead to more intense coverage of that subject? That's not necessarily bad, but it will amplify not only popular material but material that panders to what people want to hear, as opposed to what is actually true.
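To see how quickly that dynamic takes hold, here's a toy simulation of the feedback loop: two gardening topics, one accurate and one that panders, differing only in how often viewers "like" them. Every figure is invented purely to illustrate the point.

```python
# A toy model of the feedback loop: topics that collect more "likes" get
# picked more often tomorrow, regardless of whether they are accurate.
# All numbers are invented purely to illustrate the dynamic.
import random

random.seed(1)

topics = {
    "evidence-based gardening advice": {"weight": 1.0, "like_rate": 0.05},
    "miracle weekend garden hacks":    {"weight": 1.0, "like_rate": 0.15},
}

VIEWS_PER_VIDEO = 1000

for day in range(365):
    # Choose today's topic in proportion to its accumulated popularity.
    names = list(topics)
    weights = [topics[name]["weight"] for name in names]
    todays_topic = random.choices(names, weights=weights)[0]

    # Likes feed straight back into tomorrow's topic selection.
    likes = VIEWS_PER_VIDEO * topics[todays_topic]["like_rate"]
    topics[todays_topic]["weight"] += likes / 100

for name, data in topics.items():
    print(f"{name}: final selection weight {data['weight']:.0f}")
```

After a simulated year, the pandering topic dominates the channel, not because it's true, but because it's liked.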

Third, the more automated the process becomes - and there are already AI agents that connect with AI models like GPT-4 - the more it can be given highly abstract goals and completely nail them. Imagine what that would mean for an election campaign - the ability to automatically make personalised video content, presented by an authentic and realistic-looking human, that contains your name and caters to your known tastes and prejudices.

Apart from the implications for democracy and society, it would mean that the value of sites like YouTube would be hugely diminished. And maybe this is a metaphor for the world in the era of generative AGI. Humans can be wrong, they can be inaccurate, and they can be misleading. They can be careless, negligent, deceptive and dishonest.

But we know that already. We know how to deal with it. And in amongst all that imperfection, humans can still be absolutely brilliant. And when they are, it shines through on a medium like YouTube.
