New standards and equipment bring more workflows to learn in a seemingly endless procession. But does it really have to be this way?
I've always hesitated to write much about workflows, because there are as many different ones as there are people working with video. But, actually, that's the point of this article. Why are there so many workflows?
Well, part of this is because the 'Digital revolution' isn't something that has happened all at once. Different aspects of filmmaking have gone digital at different times, at a rate that's been dictated by the available technology.
A long time coming...
'Going Digital' has been going on for decades. Incredibly, some of the digital processing techniques that are in use in today's studios and edit suites were invented in the 1920s and 30s. They were only theoretical then, and it was only in the 60s and 70s in forward-looking research establishments like IRCAM (Institut de Recherche et Coordination Acoustique/Musique) that computer music composition programs started to produce real results – even if it took a week of number-crunching on those early computers to generate a few seconds of synthesised sound.
Media production workflows have always existed. But for a very long time, when film and tape were the only media for storing and reproducing audio and images, nothing really changed. Life was simpler then: there was no need to 'decode' film; you could see the images with your own eyes. If a wire carried an audio signal, you could plug it into an audio device like a mixing desk without a second thought.
The antecedents to digital video formats were analogue rotating-head tape machines. These provided a mechanism that could later store the sort of data rates you'd need for digital video recording (albeit highly compressed).
Early digital VTRs often used component analogue to transfer video between machines – and this made a lot of sense, because it was a pretty universal format.
SDI took over and is still in abundant use around the world, but only, of course, where your video is in the form of a signal, rather than a file.
Tape, whether digital or analogue, was here for a long time too, until memory was cheap enough to be able to store more than a few minutes of video. At one point, everyone thought cameras were going to use disk formats – either hard disks or removable ones with similar technology to CD-Rs. But this idea was quickly eclipsed by the arrival of affordable flash memory. This was a good thing for so many reasons, not least that it meant that complex tape and disk mechanisms could be banished from camcorders, where they were intrinsically unsuitable because of the mechanical punishment that they were subjected to.
The digital dimension
Digital workflows are inevitably different.
Unlike film, digital images don't need developing. You can re-use storage. There's no wastage. And so there isn't the same type of financial pressure to limit the number of takes, or to "get it right first time." With digital cinematography, shooting ratios can be significantly larger than with film, because there's no reason not to keep shooting until the very best shot is in the (digital) can. The result can be thousands of hours of footage, where, in some systems, each frame is an individual file.
Here's just one illustration of how digital workflows can get complicated.
If a film is being shot at 4K or higher resolution, then lower resolution copies will have to be created, because fewer devices can play back material at its native size. These "proxy" copies (rushes, dailies, etc) can be sent to everyone who needs to see them over a network, or even emailed to them, if they are small enough. Meanwhile, the original, full-resolution files will need backing up in several places. In addition to this, there will inevitably be look-up tables to think about, raw camera files vs ProRes and any number of nuances around what 'look' is visible on set and what is passed for reference to the colourists.
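To make the proxy step concrete, here's a minimal sketch of how such a copy might be generated with FFmpeg, a common tool for this job. The file names, target height and choice of ProRes Proxy are illustrative assumptions, not a standard workflow:

```python
# Sketch: build an ffmpeg command that creates a lower-resolution
# "proxy" from a full-resolution camera original. File names, codec
# and resolution here are illustrative assumptions.

def proxy_command(source, proxy, height=1080):
    """Return an ffmpeg argument list that downscales `source` to
    `height` lines (width chosen to preserve aspect ratio) and
    encodes it as ProRes Proxy for lightweight viewing copies."""
    return [
        "ffmpeg", "-i", source,
        "-vf", f"scale=-2:{height}",             # -2 keeps the width even
        "-c:v", "prores_ks", "-profile:v", "0",  # profile 0 = ProRes Proxy
        "-c:a", "copy",                          # leave the audio untouched
        proxy,
    ]

cmd = proxy_command("A001_C001.mov", "A001_C001_proxy.mov")
print(" ".join(cmd))
```

In practice this would be wrapped in a batch process over an entire card's worth of clips, with the originals checksummed and backed up before any transcoding starts.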
With film, it's easy to write on the can or even on the film itself, but you can't 'write' on a file. It's essential to give all files a meaningful name in the context of the project; and by 'file', we don't just mean 'take': we mean every version of the file, as well.
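One common answer is a naming convention that encodes the project, scene, take and version directly in the filename. The fields and separators below are assumptions for illustration; every real production defines its own scheme:

```python
# Sketch: one way to give every file - and every version of it - a
# meaningful, sortable name. The fields and separators are invented
# for illustration; real productions each define their own convention.

def version_name(project, scene, take, version, ext="mov"):
    """Build a name like 'PROJ_S012_T03_v002.mov'. Zero-padding keeps
    files in order when sorted alphabetically."""
    return f"{project}_S{scene:03d}_T{take:02d}_v{version:03d}.{ext}"

print(version_name("NIGHTFALL", scene=12, take=3, version=2))
# NIGHTFALL_S012_T03_v002.mov
```

The zero-padding matters more than it looks: it means a plain alphabetical file listing doubles as a chronological one.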
Still more versions and copies will be sent to colourists, CGI artists and audio dubbing mixers. Some clips will be turned into early teaser trailers and, as the production reaches completion, versions will be produced for DVD, Blu-ray, iPods, Apple TV, broadcast (in HD and SD), cinema trailers and digital signage. Ultimately, versions will be created, suitably watermarked, for distribution to digital cinemas. Other copies will be used to create film distribution masters.
There is almost no end to the number of copies and formats that a feature film will spawn as it goes from production to distribution.
Complexity & complications
You can see from the above that there is, in reality, no such thing as a simple change from analogue to digital. What might seem like a single paradigm change – and one that most of us think we're almost completely on the other side of – is actually a myriad of smaller changes that overlap. Movies are still made on film sometimes and are digitised at high resolution for Digital Intermediate (DI) processing. And then, still in some cases, they are printed back to film for distribution. Digital and analogue co-exist; they have for a long time, and they will continue to do so for some time to come – although all-digital workflows are now the norm.
But in reality, it is not the analogue vs digital question that causes most issues today. What causes more problems is that, within the digital domain, there exists a huge variety of methods for processing content, storing it, moving it around, compressing it and delivering it.
It is becoming clear that, even in an all-digital domain, there are more variables in the production chain than there ever were in the analogue world. To make matters even more complicated, this is not a stable situation. Every time a new format appears, it creates a ripple effect where other parts of the production system have to change to accommodate it, whether it is a simple change of resolution, or a new way of working, like with 3D or HDR.
Think, for example, about the techniques used in the current wave of Motion Capture/CGI blockbusters, where live-action sequences, rendered into CGI 'actors' are composited with CGI backgrounds. In these scenarios, there can be terabytes of live action, texture and MoCap material and millions of files.
Will we ever get over this period of intensive change? On the basis of the evidence so far, the answer is, probably, no. New ways to acquire and consume media are appearing faster than ever (just consider Virtual Reality) and the lifetime of any given media format seems to be getting shorter.
But it's not just in the field of content production that the rate of change is problematic. We are also in the throes of another revolution: the consumer one.
Viewing habits are changing in multiple directions, at a furious pace. At the same time as high definition is reaching near total adoption, demand for lower resolutions is booming, driven by the availability of connected, smart devices. 4K and, only just over the horizon, 8K, will continue to require changes in workflows and content delivery.
In future, customers will search for and pull content from wherever they need it. This shift has very big implications for content suppliers, because it means that consumers will, literally, be connected to the supply chain.
Despite an almost instinctive hope that Digital will have the same transparent simplicity as Analogue, the harsh reality is that it doesn't. Even a damaged analogue signal can at least be made sense of, but a damaged digital signal will probably be unreadable. It is a bit like having a map of Manchester in your car's GPS system when you're actually driving around London. If your understanding of how a digital file is encoded doesn't match the file you are given, there is nothing you can do with it: it might as well be a bunch of random numbers.
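You can see this in miniature with something as simple as text encoding. In the sketch below, the same bytes are read under two different assumptions: both decodes "succeed" without error, but only the one that matches how the data was written recovers the message. (A purely illustrative example, using Python's standard codecs.)

```python
# Illustration: the same bytes read under the wrong assumptions become
# nonsense. UTF-16 text decoded as Latin-1 raises no error - every
# byte is 'valid' - but the result is gibberish.

payload = "Reel 7, take 4".encode("utf-16-le")

right = payload.decode("utf-16-le")
wrong = payload.decode("latin-1")   # wrong assumption, no warning given

print(repr(right))   # 'Reel 7, take 4'
print(repr(wrong))   # 'R\x00e\x00e\x00l\x00...' - unreadable
```

A video file misread in the same way fails far less gracefully: get the codec, bit depth or byte order wrong and there is often nothing recognisable on screen at all.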
Even if there is no issue with understanding the digital information, there is another problem.
With technology changing so quickly, it is actually quite unusual for the workflow for any two productions to be the same. There are good reasons for this, but they don't make up for the fact that as long as ad hoc workflows exist, they are going to be expensive and inefficient to set up and to change, ready for the next production.
Digital workflows tend to be ad hoc because no two productions are the same. They don't have the same technical requirements, and the creative ethos of a given production will influence the technical set-up. Most productions employ freelance specialists, who will each have their own way of doing things. It is also very likely that each new production will incorporate some technical innovations (a new format, a new colour 'look' or a novel way to integrate live action with CGI).
So, for very good reasons, no two successive workflows will be identical.
From a forward-planning point of view, two workflows don't have to be very different before they might as well be completely different. You might think (although your instinct immediately tells you it will never work) that because a dog's head is superficially similar to a cat's, you could readily swap them over. After all, they both have two eyes and ears, a nose, mouth and a neck. But we all know that this can't happen. Superficial similarity is completely overridden by deep and numerous differences.
Often, it is quicker to start again from scratch than to adapt an existing workflow.
But does this always have to be the case? Are we always going to have to invent a new workflow for every new project?
Yes and no. We already have incredibly useful tools for exchanging media. MXF, AAF (and any number of XML-based exchange mechanisms for specialist aspects of post production) mean that we can take complex projects, pass them between other software and devices, and bring them back into a final composition, with relative ease. I say "relative" because while it's much easier now – in that it is possible at all – than before, there are still big gaps in usability and compatibility.
For example, it's now quite a common requirement to take a project out of an NLE and put it into a grading system. And then bring it back into the NLE or some other finishing system to complete it. This works seamlessly up to a point (which is to say that it doesn't quite work seamlessly!). There will, it seems, always be some part of the project that doesn't get carried across.
That's because you're always, in the broadest sense, translating between languages. There are always things that are easy to translate, like the basic metadata of a clip – in and out points, etc. And then there are things that don't translate as well (these are like the 'idioms' in a real language: try translating 'I rest my case' into French literally and you get 'Je repose ma valise', which is, to a French person, nonsense).
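The translation problem can be sketched in a few lines. Here, a clip moving between two hypothetical project formats has its basic metadata carried across cleanly, while a format-specific "idiom" (a proprietary effect) is flagged for manual attention. All field names are invented for illustration; real interchange formats like AAF handle far more than this:

```python
# Sketch: translating a clip between two hypothetical project formats.
# Basic metadata (name, in/out points) maps cleanly; format-specific
# "idioms" do not, so we report them rather than silently drop them.
# All field names are invented for illustration.

SUPPORTED = {"name", "in_point", "out_point"}

def translate_clip(clip):
    """Return (translated_fields, untranslated_fields)."""
    translated = {k: v for k, v in clip.items() if k in SUPPORTED}
    leftovers = {k: v for k, v in clip.items() if k not in SUPPORTED}
    return translated, leftovers

clip = {
    "name": "Scene 12 Take 3",
    "in_point": 86400,                      # frame numbers
    "out_point": 86640,
    "vendor_glow_effect": {"radius": 4.2},  # the 'idiom'
}
carried, manual = translate_clip(clip)
print("carried across:", sorted(carried))
print("needs manual attention:", sorted(manual))
```

The important design choice is the second return value: a good exchange tool tells you what it couldn't translate, instead of quietly losing it somewhere between the NLE and the grading suite.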
There is plenty about media exchange that is idiomatic and sometimes these nuances have to be translated manually.
Will there ever be one workflow that fits all? I very much doubt it. And that's not such a bad thing, for two reasons.
First, it's good to be flexible. Who knows what's coming next? The more rigid we make our rules for workflows, the more likely we are to be caught out by the next innovation, that may be too big to be ignored.
And secondly, we're getting better at this stuff.
When you listen to post production professionals today, their everyday discourse would be almost incomprehensible to someone from only twenty years ago. Of course, things haven't stood still, and they won't, but we are definitely well and truly into the current digital paradigm, and we're pretty comfortable with it.