While it remains critical on set, Phil Rhodes asks whether in 2015 there are better solutions than continuing to use timecode in post.
It's been decades since we started identifying the individual frames of a moving-image recording with timecode. The complexities caused by fractional frame rates in NTSC notwithstanding, it's a fairly straightforward system: eight digits to identify the frame (and optionally other information). Even the standard for transferring it between equipment is a fairly basic digital encoding, which currently can be handled by cheap micro-controllers. In the modern world, including timecode in a high-bitrate media file is absolutely trivial, at least from a bandwidth point of view.
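To illustrate quite how simple the scheme is at integer frame rates, here's a sketch of the conversion between a running frame count and the familiar eight-digit display. The 25fps rate is just an assumption for the example; NTSC's drop-frame rates need extra housekeeping that's omitted here:

```python
FPS = 25  # assumed integer frame rate; drop-frame is deliberately ignored

def frames_to_timecode(frame_count, fps=FPS):
    """Convert an absolute frame count to HH:MM:SS:FF."""
    frames = frame_count % fps
    seconds = (frame_count // fps) % 60
    minutes = (frame_count // (fps * 60)) % 60
    hours = (frame_count // (fps * 3600)) % 24
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

def timecode_to_frames(tc, fps=FPS):
    """Convert HH:MM:SS:FF back to an absolute frame count."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

print(frames_to_timecode(90125))  # 01:00:05:00
```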
It's slightly surprising, therefore, that it's often so difficult to get a collection of production and postproduction equipment to agree on how timecode should work. Back in the day, it was relatively straightforward. SMPTE-12M is the standard which describes how linear timecode (LTC) works, and in the 80s, a piece of video equipment either had LTC support or not. Since LTC is a fairly low-bandwidth signal intended to be recordable on analogue audio tracks, it was even possible to sacrifice one of a stereo pair to add LTC recording and playback capability to almost any piece of equipment (and it still is, which we'll talk about anon). Many VTRs of the 1980s did exactly that.
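For the curious, the tape-friendliness comes from LTC's channel coding, biphase mark, which guarantees a level transition at every bit boundary, with an extra mid-bit transition marking a one. A real LTC frame is an 80-bit structure with BCD time fields and a sync word; the sketch below demonstrates only the biphase-mark layer, with made-up parameters:

```python
def encode_biphase(bits, samples_per_bit=8, level=1):
    """Biphase-mark encode: flip the level at every bit boundary,
    and flip again mid-bit for each '1'."""
    out = []
    for bit in bits:
        level = -level  # boundary transition
        half = samples_per_bit // 2
        out.extend([level] * half)
        if bit:
            level = -level  # mid-bit transition marks a '1'
        out.extend([level] * (samples_per_bit - half))
    return out

def decode_biphase(samples, samples_per_bit=8):
    """Recover bits by measuring the spacing between level transitions:
    two short intervals are a '1', one full-bit interval is a '0'."""
    transitions = [i for i in range(1, len(samples)) if samples[i] != samples[i - 1]]
    transitions = [0] + transitions + [len(samples)]
    intervals = [b - a for a, b in zip(transitions, transitions[1:])]
    bits, i = [], 0
    while i < len(intervals):
        if intervals[i] < samples_per_bit * 3 // 4:
            bits.append(1)
            i += 2  # consume both half-bit intervals
        else:
            bits.append(0)
            i += 1
    return bits

print(decode_biphase(encode_biphase([1, 0, 1])))  # [1, 0, 1]
```

Because there's at least one transition per bit, the signal has no DC component and survives the frequency response of an analogue audio channel.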
Currently, though, timecode handling has been complicated enormously by the flexibility of digital media and the ability it gives us to quickly develop and deploy things which, previously, might have been the subject of a multi-year standardisation procedure. Nearly all digital media files can handle timecode: QuickTime, MXF and AVI movies can all do it, as can DPX still frames. A key capability of the BWF extensions to wave files is the ability to carry timecode, or at least a samples-since-midnight count, which serves an equivalent purpose in many circumstances. The problem with all this is that software must understand the way in which the timecode has been stored, read it properly and, critically, pass it on to other software via intermediate formats.
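By way of illustration, turning a BWF-style samples-since-midnight count back into a display timecode is plain arithmetic; the 48kHz sample rate and 25fps frame rate below are assumptions for the example:

```python
def time_reference_to_timecode(samples_since_midnight, sample_rate=48000, fps=25):
    """Derive a display timecode from a BWF-style TimeReference,
    i.e. a samples-since-midnight count (non-drop-frame)."""
    total_seconds, remainder = divmod(samples_since_midnight, sample_rate)
    frames = (remainder * fps) // sample_rate
    hours, rest = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rest, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# A file whose first sample fell exactly ten hours after midnight:
print(time_reference_to_timecode(10 * 3600 * 48000))  # 10:00:00:00
```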
One example of this situation causing problems concerns recent versions of Adobe's Premiere Pro editor and Blackmagic's DaVinci Resolve grading software. Consider a project involving ProRes QuickTimes acquired, say, on an Atomos Samurai Blade recorder. This is a common situation involving widely-deployed technologies. The recorder creates timecoded QuickTimes, and both Resolve and Premiere Pro know how to read that timecode. What could possibly go wrong? Well, the problem arises because the only way to get a Premiere timeline into Resolve is to use Premiere's facility to export an XML file in a format developed by Apple for Final Cut, which seems to have become something of a de facto standard.
This works reasonably well, at least for files taken straight from the recorder that have been cut directly onto the timeline. Unfortunately, any other file, such as a title, a visual effects shot, or synthesised media such as music, may not be timecoded and, as a result, is likely to fail to conform properly. Now, we shouldn't unfairly single out Premiere and Resolve for scrutiny here; there are lots of other pieces of software in which things don't quite come out right. The approach of using one piece of software to produce a file that appears to be from another, in order to keep a third piece of software happy, should seem like a bit of a workaround to most people. The issue is why we have to do this and whether these problems are avoidable.
We already have a better way.
The crux of the matter is that this sort of thing should never have to happen. It's easy to say that software should all support the relevant standards. But actually, it isn't really even a standardisation issue. The key realisation is that we don't actually need timecode on computers. Digital media files have unambiguous start points, so a filename and a frame or sample count is sufficient to uniquely identify a chunk of footage or audio, or at least to identify it as uniquely as an eight-digit number can (give or take a date encoded in the user bits). Many basic editors don't read timecode from files even if it's present, and are perfectly capable of creating, storing and rebuilding timelines.
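A sketch of what that looks like: a timeline entry needs nothing more than a filename and frame offsets from the file's start, and it works identically for timecoded camera originals and untimecoded titles or renders. The clip names here are invented for the example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MediaRef:
    """A span of media identified by filename plus frame offsets,
    counted from frame 0 of the file; no timecode involved."""
    filename: str
    in_frame: int   # first frame used
    out_frame: int  # exclusive out point

    def duration(self):
        return self.out_frame - self.in_frame

# A timeline is just an ordered list of such references.
timeline = [
    MediaRef("A003_C012_0815.mov", 24, 120),  # camera original
    MediaRef("title_card_v2.mov", 0, 75),     # untimecoded title conforms identically
]
print(sum(clip.duration() for clip in timeline))  # 171
```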
The technical complexities of supporting timecode in all the formats the world likes are non-trivial. However, all software must, intrinsically, support filenames, and identifying a piece of media by sample count from the file's in-point is utterly trivial. Changing the timecode in, say, a BWF file requires specialist tools; filenames can be changed with tools available by default on every operating system. Being forced to eyeball-conform dozens of VFX shots because software doesn't support this elementary approach is a fast way to frustration.
Naturally, one would not wish to provoke some sort of crusade to see timecode exorcised from film and television work. It's still often the most convenient way of synchronising dual-system sound, where the very point at issue is that the file on the camera and the file in the audio recorder don't have a common start point, so simple filename matching (assuming the audio and video for a take had relatable filenames) is insufficient. While timecode gear can be costly, even audio recorders that don't ostensibly support timecode can be used with it, simply by connecting the LTC output of a synchronising device to an audio input. The LTC signal is often at a very high level and may require attenuation for proper recording, though this is relatively easy to do and the attenuator can be made compact enough to fit into the back of a connector.
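Once both devices are jammed to the same timecode, lining the recordings up is just arithmetic on the two start points. A hedged sketch, assuming a 25fps non-drop-frame rate and 48kHz audio:

```python
def tc_to_seconds(tc, fps=25):
    """Convert HH:MM:SS:FF to seconds (non-drop-frame)."""
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) + f / fps

def sync_offset_samples(video_start_tc, audio_start_tc, fps=25, sample_rate=48000):
    """Samples between the recorder's start point and the camera's, assuming
    both were jammed to the same timecode source. A positive result means the
    audio file starts early and needs that much trimmed from its head."""
    delta = tc_to_seconds(video_start_tc, fps) - tc_to_seconds(audio_start_tc, fps)
    return round(delta * sample_rate)

# The camera rolled five seconds after the recorder:
print(sync_offset_samples("10:00:05:00", "10:00:00:00"))  # 240000
```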
Moves to make this easier, by creating software to automatically convert stereo wave files containing LTC as audio data into mono BWF files with meaningful timecode, are afoot. Timecode as an on-set tool remains crucial. But in post production, it seems a shame that we're still so exclusively bound by it.