
Over the edge - what edge computing is going to mean for filmmakers

Replay: Edge computing is a key technology for the future of cameras and filmmaking. But it's almost never explained in those terms. Here's what it is, and why it's so important.

For the last couple of decades, broadcasting and filmmaking have been in a constant, rapid state of change. At the centre of this flux - and a huge causal factor - is the convergence of telecoms and computing. Once audio and video become digital, our images - the result of our creative activities - are represented as numbers. As such, they become as amenable to being processed by a computer, or sent over a network, as a spreadsheet or a word processing document.

Digitisation has brought benefits that would have seemed like magic only thirty years ago. Perfect digital copies. Digital processing that increases in sophistication by the day. And now we're entering an era of AI and Machine Learning, where we can achieve levels of performance that were literally impossible only a few years ago.

Powerful digital tools call for powerful computers. The heaviest-duty tasks have largely migrated to GPUs, while - especially in the Apple universe - ARM-based systems-on-a-chip incorporate silicon designed specifically for AI duties.

In fact, the total computing power in today's iPhones (and other smartphones) rivals that of high end laptops. There's no shortage of processing capability in phones, thanks to huge advances in chip design, what remains of Moore's law (it hasn't quite gone away yet) and the incredible power efficiency of the ARM architecture.

But even if a camera were to be as capable as a smartphone - and there are good reasons why they're not - there are some (mostly) unforeseen challenges for filmmakers and cinematography that will require orders of magnitude more computing to solve, especially as we move further into the metaverse.

So what is "edge computing"?

It's a slightly baffling term if you're accustomed to thinking of the internet as being "everywhere", because, strictly speaking, something that's everywhere can't have an edge.

As you know, the internet is the means by which connected computers and - increasingly - other devices are able to exchange data with each other and with servers in the cloud. Most of the time, the internet works at a level of abstraction that means we don't have to think about its physical nature. But, whatever else it is, the internet isn't nothing. It is, in fact, made up of computers - usually in racks - housed in anonymous-looking data centres. Fast networks connect these computer warehouses to the rest of the internet via routers that essentially read the IP addresses on the packets of data and send them on to the next router, in a process that continues until the electronic parcels reach their intended destination. The final stage of that journey, or "last mile" as I like to call it, represents the "edge" of the network.
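
If you want to see that hop-by-hop journey for yourself, the standard traceroute tool will show it to you. Here's a tiny sketch that simply calls it from Python; the destination is an arbitrary example, and the hops and timings you see will depend entirely on where you are (and on traceroute/tracert being installed).

```python
# Show the chain of routers ("hops") a packet passes through on its way
# to a destination. The host is just an example; output varies by network.
import platform
import subprocess

host = "example.com"  # any reachable destination will do
cmd = ["tracert", host] if platform.system() == "Windows" else ["traceroute", host]

# Each line of output is one router that reads the packet's IP address
# and forwards it one step closer to its destination.
subprocess.run(cmd, check=False)
```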

It's a special place, because it has a high-bandwidth connection to the rest of the internet, but is physically very close to the end user. Does it matter? A lot of the time it doesn't, but some of the time it really, really does.

We probably feel comfortable with the idea of the cloud as the source of services like Gmail, Facebook, Zoom and Dropbox. What happens beyond the user interface for these products doesn't really matter to us, as long as they just work. That, in essence, is the cloud.

It's all fine, until it isn't fine. There are limits to the cloud. As you'd expect, those limits are being quickly eroded by the rapid evolution of all the contributing technologies, but there's one issue that will never be solved: the speed of light. For all the gloriousness of the cloud's ubiquity, if you're on one side of the world and you're in a transaction with a server in a data centre on the other side, there is going to be a certain delay, or "latency", in your interactions with the distant computer.

Latency

And it's not just the speed of light. Each time a packet transits through a router, there are delays. The data has to be read in, processed and then output. Each of these steps might involve buffering, and of course the delays are cumulative. They can easily add up to a second or more, depending on network conditions.
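
To put some very rough numbers on it, here's a back-of-the-envelope sketch. The distance, hop count and per-hop delay are illustrative assumptions rather than measurements, but they show why a trans-global round trip can never be instant.

```python
# A rough latency model for a round trip to a server on the other side
# of the world. All figures are assumptions for illustration.

SPEED_OF_LIGHT_IN_FIBRE_KM_S = 200_000   # roughly 2/3 of c in glass
ONE_WAY_DISTANCE_KM = 20_000             # about half the Earth's circumference
HOPS = 20                                # routers along the path (assumed)
PER_HOP_DELAY_MS = 1.0                   # read in, queue, forward (assumed average)

propagation_ms = (ONE_WAY_DISTANCE_KM / SPEED_OF_LIGHT_IN_FIBRE_KM_S) * 1000
one_way_ms = propagation_ms + HOPS * PER_HOP_DELAY_MS
round_trip_ms = 2 * one_way_ms

print(f"Propagation alone (one way): {propagation_ms:.0f} ms")  # ~100 ms
print(f"Estimated round trip:        {round_trip_ms:.0f} ms")
# The propagation part is physics, and no amount of engineering removes it.
```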

Imagine trying to drive a car whose steering wheel had to connect to its wheels via the internet. It wouldn't end well. With an unpredictable delay, or "latency", it would be impossible to control the car. It would be a disaster.

But what if the connection were solely via a computer located at the "edge" point nearest to you? The results would be far more predictable and deterministic.

Operating systems and WiFi routers introduce delays of their own. To avoid them, devices that need real-time interaction with the internet, and processing within it, will need two things: a direct 5G (or better) connection, and computers at the nearest edge location to do that processing. This achieves - above all else - lower latency and a more deterministic response. 5G can deliver not only extremely high-bandwidth wireless connections, but very low latency too.
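
One way to get a feel for why proximity matters is simply to time a TCP handshake to a nearby endpoint and a distant one. The sketch below does exactly that; the hostnames are placeholders, so substitute servers you actually want to compare.

```python
# Compare connection times to a nearby and a far-away host.
# The hostnames below are hypothetical placeholders.
import socket
import time

def connect_time_ms(host: str, port: int = 443, attempts: int = 3) -> float:
    """Average time to complete a TCP handshake, in milliseconds."""
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        total += time.perf_counter() - start
    return (total / attempts) * 1000

for host in ("nearby-edge.example.com", "far-away-region.example.com"):
    print(f"{host}: {connect_time_ms(host):.1f} ms")
```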

While driverless cars are being fitted with increasingly powerful on-board computers, a large part of their capabilities will come from knowledge or awareness from outside the field of view of the cars' own sensors. Perhaps it will come from other cars, via the internet. (Cars will also have vehicle-to-vehicle communication, but for the whole picture there will need to be an internet connection too.)

For filmmakers, well, this is where it gets exciting and weird at the same time. There are two things to notice here. First, future 5G versions will provide enough bandwidth for almost any kind of wireless data transfer - including raw video. Second, it should be possible to send video to an edge server from a camera, have it processed and receive it back in the camera within the space of a single frame.
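
It's worth doing the arithmetic. At 25 frames per second a frame lasts 40 milliseconds, and whatever the network round trip consumes, the remainder is the processing budget at the edge. The round-trip figure below is an assumption for illustration, not a measurement.

```python
# How much time is there to play with inside a single frame?

FRAME_RATE_FPS = 25
ASSUMED_EDGE_ROUND_TRIP_MS = 10   # camera -> edge -> camera over 5G (assumed)

frame_time_ms = 1000 / FRAME_RATE_FPS
processing_budget_ms = frame_time_ms - ASSUMED_EDGE_ROUND_TRIP_MS

print(f"Frame time:        {frame_time_ms:.1f} ms")
print(f"Processing budget: {processing_budget_ms:.1f} ms per frame at the edge")
```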

That makes it real-time. Remember that it's now possible to use virtual workstations in the cloud: powerful computers complete with GPUs and even specialist AI processors. Imagine if all of this became available at the edge. There would be almost no limit to the amount of processing that could be carried out in real time and fed back to the viewfinder of the camera: special effects, computational lens correction, super-sophisticated object detection for metadata creation and autofocus, and even real-time interaction with the metaverse for shooting hybrid live and virtual action scenes. Any process you can imagine - all taking place as if your camera itself had that processing power.
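
What might that look like in practice? Here's a deliberately minimal sketch of the per-frame round trip: send a frame to an edge server, get the processed frame back, and check whether it returned within one frame period. The host, port and the simple length-prefixed protocol are all assumptions for the sake of illustration.

```python
# Minimal per-frame round trip to a (hypothetical) edge processing server.
import socket
import struct
import time

EDGE_HOST = "edge.example.com"   # hypothetical edge endpoint
EDGE_PORT = 9000                 # hypothetical port
FRAME_TIME_S = 1 / 25            # 25 fps

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from the socket."""
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("edge server closed the connection")
        data += chunk
    return data

def process_frame_at_edge(sock: socket.socket, frame: bytes) -> bytes:
    """Send one frame, length-prefixed, and read the processed frame back."""
    sock.sendall(struct.pack("!I", len(frame)) + frame)
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

with socket.create_connection((EDGE_HOST, EDGE_PORT)) as sock:
    frame = b"\x00" * (1920 * 1080 * 3)   # stand-in for one raw frame
    start = time.perf_counter()
    processed = process_frame_at_edge(sock, frame)
    elapsed = time.perf_counter() - start
    verdict = "inside" if elapsed < FRAME_TIME_S else "outside"
    print(f"Round trip took {elapsed * 1000:.1f} ms ({verdict} one frame period)")
```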

Cameras will be able to package and outsource their "personalities" to an edge computing resource. Frame.io's Camera-to-Cloud initiative has perhaps paved the way for this, but thanks to edge computing, remote real-time processing is likely to become the norm.

Will it ever happen? It already has. AWS has a service called Wavelength that places powerful computers within low-latency 5G zones. It's up and running now, just waiting for applications. And over the next year or so, we will start to see them.
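
Wavelength zones appear as ordinary EC2 zones attached to a parent AWS region, so you can already discover them programmatically. A minimal sketch, assuming boto3 is installed and AWS credentials are configured; the region is chosen purely as an example.

```python
# List the Wavelength zones visible from one parent region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region
response = ec2.describe_availability_zones(
    AllAvailabilityZones=True,
    Filters=[{"Name": "zone-type", "Values": ["wavelength-zone"]}],
)

for zone in response["AvailabilityZones"]:
    # NetworkBorderGroup indicates which carrier location the zone sits in.
    print(zone["ZoneName"], "-", zone.get("NetworkBorderGroup", ""))
```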

Tags: Technology Futurism computing
