This is not a story about the democratisation of filmmaking. It is about how end-to-end software-defined channels are removing the old constraints around resolution, frame rate and aspect ratio.
We have seen so much change in filmmaking technology and culture in the last ten years that it’s easy to overlook some of the biggest thresholds that we’ve crossed. To fully understand this, we need to go back just far enough to remind ourselves what it was like before digital video and before the internet.
I’m guessing that time would be around the start of the 1990s.
Digital video was around before then, but mostly in the labs. I remember my friend, then a research scientist at Thorn EMI, based in Hayes, London, telling me in 1988 about a digital video system the company was working on. I don’t remember the details but what I do recall is a frisson of excitement when he told me the company was designing a video technology that involved compression across time. That blew my mind because at that time I was working with digital audio rather than video and this seemed like a pretty alien concept.
He was, of course, talking about long-GOP compression - the technique of referencing frames in the future and in the past to take video compression to levels of efficiency that were previously unheard of. Given that digital video was pretty much unheard of at all outside of laboratories at the time, that was quite a revelation.
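To make the idea concrete, here is a toy sketch of how frames in a long-GOP stream reference each other. This is purely illustrative - a simplified model, not the structure of any real codec: an I-frame stands alone, a P-frame references a past frame, and a B-frame references both a past and a future frame, which is the "compression across time" that seemed so alien then.

```python
# Toy model of long-GOP frame referencing (illustrative only, not a real codec).
# I-frame: self-contained. P-frame: references a past I/P frame.
# B-frame: references both a past and a future I/P frame.

def gop_references(pattern):
    """For each frame in a GOP pattern, list the frame indices it references."""
    refs = []
    for i, ftype in enumerate(pattern):
        if ftype == "I":
            refs.append([])  # intra-coded: no references
        elif ftype == "P":
            # reference the nearest earlier I or P frame
            prev = max(j for j in range(i) if pattern[j] in "IP")
            refs.append([prev])
        elif ftype == "B":
            # reference the nearest earlier AND nearest later I/P frame
            prev = max(j for j in range(i) if pattern[j] in "IP")
            nxt = min(j for j in range(i + 1, len(pattern)) if pattern[j] in "IP")
            refs.append([prev, nxt])
    return refs

for i, (f, r) in enumerate(zip("IBBPBBP", gop_references("IBBPBBP"))):
    print(i, f, "references", r)
```

Because the B-frames only need to encode the difference from frames on either side of them, most of the stream can be stored as small deltas rather than full pictures.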
The first digital formats
In the 90s, both professional and consumer digital video formats arrived in the form of Digital Betacam and MiniDV (which also had Pro derivatives). But here’s the thing: these digital formats were closely bound to broadcast standards. They weren’t bringing anything new except for a digital emulation of formats that already existed.
It was very much chicken and egg. There was no need to go beyond broadcast formats because there was no means to display anything other than what a "standard" TV could show on its screen. Contrast this with today, when I have on my desk two 32" 4K monitors (one an ASUS colour-calibrated HDR device that can sustain 1000 nits continuous output and cope with most HDR standards). I can watch literally any type of video on my desk, except for 8K, and I expect to have an 8K device soon.
Sony, helped by Panavision, experimented - very successfully - with digital cinematography with cameras like the HDW-F900, capable of shooting a film-like 24 frames per second. It was still quite obviously a broadcast camera, so it was really Dalsa and RED Digital Cinema that broke through the broadcast constraints, with the 4K Origin and then the 4K RED ONE and its compressed raw output.
The RED ONE Digital Cinema Camera. Image: RED Digital Cinema.
The computer relationship
The part of this that people rarely think about is computers. Computer graphics capabilities took us beyond broadcast formats. Within reason, if you plug a monitor into a modern computer, there's a little digital dialogue between them, and it just works at the native resolution of the monitor. It wasn't always quite that easy ten years ago, and there weren't many 4K computer monitors about, but the point is that the picture format was no longer defined by what was being broadcast, but by the size, shape and crop of a digital camera sensor.
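That "little digital dialogue" is the monitor handing the computer an EDID data block, whose preferred-timing descriptor encodes the panel's native resolution. A minimal sketch of how those bytes decode, assuming the EDID 1.3 detailed timing descriptor layout (the synthetic descriptor below sets only the fields we read):

```python
# Sketch: decoding a monitor's native resolution from an EDID detailed timing
# descriptor (EDID 1.3 layout). Only the active-pixel fields are modelled here.

def native_resolution(dtd):
    """Decode active pixels from an 18-byte EDID detailed timing descriptor."""
    h = dtd[2] | ((dtd[4] & 0xF0) << 4)   # horizontal active: low 8 + high 4 bits
    v = dtd[5] | ((dtd[7] & 0xF0) << 4)   # vertical active: low 8 + high 4 bits
    return h, v

# Synthetic descriptor for a 3840x2160 panel (other fields left zeroed)
dtd = bytearray(18)
dtd[2] = 3840 & 0xFF
dtd[4] = (3840 >> 4) & 0xF0
dtd[5] = 2160 & 0xFF
dtd[7] = (2160 >> 4) & 0xF0
print(native_resolution(bytes(dtd)))  # (3840, 2160)
```

The operating system reads exactly this kind of descriptor over the display cable, which is why a modern desktop "just works" at whatever size and shape the panel happens to be.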
It’s hard to exaggerate how big a change this represented. While broadcasters had to stick with 720 or 1080 resolutions, content creators could break away from these legacy restrictions. Here’s how Graeme Nattress, RED Digital Cinema’s Colour Scientist, put it when I spoke to him in his hometown of Toronto, three years ago:
“At that time, the biggest thing holding everything back was ‘broadcast standards.’ With analogue it was obvious that everything had to tie into the same standard or it would never work. With the introduction of digital non-linear editing we saw that wall start to crack. The mess over the HD standards (a mix of 720p and 1080i, different sizes and frame rates and interlace, along with the historical mess that is non-integer frame rates and the associated drop-frame time-code) didn’t help. Once video was computer data, moving images could be any size and any frame rate within the power of that computer system. Just as film was not tied to a distribution format (so was output agnostic, and could easily be made to work with any TV broadcast standard around the world), computer based video didn’t need to be tied to a particular standard either. At the point of RED ONE launch (or even before), it therefore made sense to make the image acquisition the best possible without regard or limit to broadcast standards.”
And now, there’s another threshold that we’ve passed, which, to sum it up in one word, is virtualisation.
That’s a rather abstract term, but what it means in this context is that there are no hardware limits to the formats we can shoot in, and the formats that we can view in. If you want to shoot and post produce in 16K 21:9, then there’s little to stop you - at least, if you’re prepared to lash a couple of cameras together and join the images in software.
This might start to make more sense when you have a 16K video wall made up of very high resolution video tiles. You won’t want to watch the news anchor if their head fills the wall and each nostril is the size of a dinner plate - so it seems more reasonable to zoom each image to a comfortable size, where an immersive fly-through of the Grand Canyon takes up the whole wall, but less visually grabbing material is sized appropriately. But you’ll also be able to accommodate very wide (or even tall) aspect ratios.
New AI-based codecs will allow you to decode video files at any frame rate and any resolution (with AI upscaling where necessary). Video resolutions and frame rates will finally be “plug and play”.
I have a feeling that once this type of viewing experience is widespread (and I’d take a wild guess at ten years from now) it will change the way we make content. We’ll be more adventurous; more ambitious. And the days of “broadcast standards” will seem so last century.