
When the power goes down and you lose everything


Image: Shutterstock.com. Modern cameras do not take kindly to this sort of thing...

If you unplug a traditional tape deck while it's recording, the recording will be perfectly usable up to that point. If you do the same thing to a modern camera, though, you're likely to lose the whole take.

It's a fairly simple situation. There's a file on the disk representing a take. It consumes an amount of disk space that's reasonable given the amount of time recorded before the power cut. If we look at the contents of the file using programmers' tools, well, it looks pretty much like a Quicktime movie. The same can be broadly true if we're using a system which records some variety of MPEG-4, as files named with the .mp4 extension are set up similarly to Quicktime movies; MXF can have much the same problem, depending on how it's been written. There's a big lump of data there, but players won't play it and editors won't edit it.

To understand what's going on, we need to know how Quicktime files work. They're similar to the RIFF format used in AVI and wave files, and even to the IFF image files used on Amigas back in the day and the AIFF audio format. The idea is to separate different parts of the file, such as information about the sort of compression in use, timecode, and other metadata, from the picture and audio data. Each of these parts (called chunks in AVI and wave, and atoms in Quicktime) has a four-character ID code and information about how long it is.
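To make that concrete, here's a minimal Python sketch (the filename is just a placeholder) that walks the top-level atoms of a Quicktime or MP4 file: four bytes of length, four bytes of type code, repeat until the file runs out. Very large atoms use a 64-bit length variant, which the sketch also allows for.

```python
import struct

def walk_atoms(path):
    """Print the four-character type and length of each top-level atom
    in a Quicktime/MP4 file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break                                  # end of file
            size, fourcc = struct.unpack(">I4s", header)
            if size == 1:                              # 64-bit extended size follows
                size = struct.unpack(">Q", f.read(8))[0]
                payload = size - 16
            elif size == 0:                            # atom runs to the end of the file
                print(fourcc.decode("ascii", "replace"), "(runs to end of file)")
                break
            else:
                payload = size - 8
            print(fourcc.decode("ascii", "replace"), size)
            if payload < 0:
                break                                  # damaged header, give up
            f.seek(payload, 1)                         # skip over the atom's contents

walk_atoms("clip.mov")                                 # placeholder filename
```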

The purpose of this is that a player can ignore things it doesn't recognise, which allows the format to be extended with new capabilities without breaking compatibility with existing software. This is how the Broadcast Wave file format works; BWF files are still legal wave files, they just include a new area containing timecode and other information. The four-character code for the new area (“bext” in the case of a broadcast wave) won't be understood by conventional wave file players, but they can still read the length of the unknown chunk, skip over it accurately, and at least play the audio.
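The same trick works on the wave side of things. Here's a similar sketch, again with a placeholder filename, that lists every chunk in a RIFF/wave file and skips anything it doesn't recognise, “bext” included, simply by honouring the declared chunk length.

```python
import struct

def list_wave_chunks(path):
    """List the chunks in a RIFF/WAVE file, skipping anything we don't
    recognise (such as the Broadcast Wave 'bext' chunk) by using the
    declared chunk length."""
    with open(path, "rb") as f:
        riff, riff_size, form = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or form != b"WAVE":
            raise ValueError("not a wave file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                break                                  # end of file
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            print(chunk_id.decode("ascii", "replace"), chunk_size)
            # RIFF chunk data is padded to an even number of bytes
            f.seek(chunk_size + (chunk_size & 1), 1)

list_wave_chunks("interview.wav")                      # placeholder filename
```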

This presents a problem, though. If we're recording a file live, we don't know how long the file is going to be until we stop recording. To handle this, a recorder usually has to write the video data without its length information (which may mean the length field is set to zero), then go back and fill it in once the recording is finished and the length is known. The problem should now be obvious: if we never get around to finishing the recording properly, the actual video data will be there but the length information will be wrong, often zero, so the file can't be played.
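A heavily simplified sketch of that write-now-patch-later dance might look like the following. It's illustrative only: the helper names and stand-in frame data are mine, and a real recorder writes plenty more besides the “mdat” atom.

```python
import struct

def start_mdat(f):
    """Write an 'mdat' header with a zero length, remembering where it is
    so the real length can be patched in later."""
    header_pos = f.tell()
    f.write(struct.pack(">I4s", 0, b"mdat"))   # length not yet known
    return header_pos

def finish_mdat(f, header_pos):
    """Go back and fill in the real atom length once recording stops.
    If the power fails before this runs, the zero stays in the file."""
    end = f.tell()
    f.seek(header_pos)
    f.write(struct.pack(">I", end - header_pos))
    f.seek(end)

with open("take.mov", "wb") as f:                      # placeholder filename
    pos = start_mdat(f)
    for frame in (b"\x00" * 1024, b"\x00" * 1024):     # stand-in frame data
        f.write(frame)
    finish_mdat(f, pos)                                # never reached if the plug is pulled
```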

This suggests we should be able to fix the file by setting the length field to something other than zero, right? Well, no. In a Quicktime file, the atom which contains the actual video data is called “mdat.” So that we can skip through the file, as we do when we move the slider around during playback, there's another atom which contains an index to the points in the file where each frame is recorded. Traditionally, that index was placed near the start of the file, but if we're recording a live take we don't know how long the index will be. We can store up information about all the frame positions and write the index at the end, but if the recording is interrupted this won't happen and the file will be unreadable. There are sundry other, similar problems.
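That index normally lives in an atom called “moov,” so one quick sanity check on a suspect file is simply whether a top-level “moov” atom exists at all. A rough sketch, again with a placeholder filename:

```python
import struct

def has_moov(path):
    """Return True if a top-level 'moov' atom (the index) is present,
    which is a good sign the recording was finalised properly."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                return False                           # hit the end without finding it
            size, fourcc = struct.unpack(">I4s", header)
            if fourcc == b"moov":
                return True
            if size == 1:                              # 64-bit extended size follows
                size = struct.unpack(">Q", f.read(8))[0] - 8
            if size < 8:
                return False                           # zero or bogus length: likely truncated
            f.seek(size - 8, 1)

print("finalised" if has_moov("take.mov") else "probably truncated")
```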

It's actually much easier to fix unfinished (call it “truncated”) AVI files, because AVI files store each frame in its own data area. AVI files also have an index with a list of frame positions, and if that index is missing we can work out where the frames are by reading the AVI structure, the four-character codes and the chunk lengths, and rebuilding the index. It's a comparatively minor task in software engineering and lots of tools exist. Quicktime is a bit more difficult, because it (generally) stores all of the actual video data in a single atom. There are reasons for this, but it complicates recovery.
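As a sketch of how such a rebuild can work, the following scans a damaged AVI for the stream chunk IDs (the exact IDs depend on how the streams are numbered in a given file) and recovers a list of frame offsets and sizes from the lengths stored alongside them. The filename and function name are placeholders.

```python
import struct

FRAME_IDS = (b"00dc", b"00db", b"01wb")    # compressed video, raw video, audio chunks

def scan_avi_frames(path):
    """Rebuild a rough frame list for a truncated AVI by scanning for
    stream chunk IDs and reading the length stored after each one."""
    data = open(path, "rb").read()
    frames, pos = [], 0
    while True:
        hits = [i for i in (data.find(fid, pos) for fid in FRAME_IDS) if i != -1]
        if not hits:
            break
        hit = min(hits)
        if hit + 8 > len(data):
            break                                      # header cut off at the truncation point
        size = struct.unpack("<I", data[hit + 4:hit + 8])[0]
        frames.append((data[hit:hit + 4].decode(), hit, size))
        pos = hit + 8 + size + (size & 1)              # chunk data is padded to an even length
    return frames

for fourcc, offset, size in scan_avi_frames("damaged.avi"):   # placeholder filename
    print(fourcc, offset, size)
```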

To fix a truncated Quicktime movie, we have to search through all the frame data and figure out where the individual frames are. This requires more than just knowledge of the file format. It also requires information about how the codec (ProRes, DNxHD and so on) compresses frames, so we need additional information about what the file is supposed to contain. This is why Quicktime recovery tools sometimes ask for an example good file, recorded by the same recorder using the same settings. Figuring out the frame boundaries in the codec data is not necessarily an exact science, since compressed data tends to look like an alphabetti spaghetti of meaningless numbers, but it can often be done.
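As one hedged example of what that search can look like: a ProRes frame carries an “icpf” identifier immediately after a four-byte frame size, so a recovery tool can scan the raw data for that marker to guess where frames begin. Other codecs need other signatures, and false positives are possible because compressed data can contain the marker purely by chance, which is exactly why this isn't an exact science.

```python
def find_prores_frames(path):
    """Locate candidate ProRes frame boundaries in a truncated file by
    searching for the 'icpf' frame identifier; each frame is preceded by
    a four-byte frame size, so the frame actually starts four bytes
    before the marker."""
    data = open(path, "rb").read()
    offsets, pos = [], 0
    while True:
        hit = data.find(b"icpf", pos)
        if hit == -1:
            break
        offsets.append(hit - 4)            # frame starts at its size field
        pos = hit + 4
    return offsets

frames = find_prores_frames("take.mov")    # placeholder filename
print(len(frames), "candidate frames found")
```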

Ultimately, this is one reason the industry hung on to recording DPX file sequences for so long. It's always possible for a power failure to corrupt a whole disk full of data, but it's unlikely, and with a frame sequence the most we'll lose is one frame. So, is there anything manufacturers could do to improve this situation? Well, they could make truncated files easier to fix by recording a second file alongside the take containing the index information. A recorder could delete these files once the recording was complete. If it were to find one on startup, however, that would be an indication that the associated Quicktime file was truncated, and the recorder itself could take measures to (quickly) rebuild the file. This sort of thing has been done, but it isn't as universal as it could be.
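Purely as a sketch of the idea (the sidecar naming and record format here are invented, not any manufacturer's actual scheme), such a recovery file might be as simple as this:

```python
import os, struct

class SidecarIndex:
    """Hypothetical recovery index: one (offset, size) record per frame,
    flushed as we go, deleted only when the take closes cleanly."""
    def __init__(self, movie_path):
        self.path = movie_path + ".recovery"
        self.f = open(self.path, "ab", buffering=0)    # unbuffered appends

    def add_frame(self, offset, size):
        self.f.write(struct.pack(">QQ", offset, size))
        os.fsync(self.f.fileno())                      # try to survive a power cut

    def close_clean(self):
        self.f.close()
        os.remove(self.path)                           # normal shutdown: no sidecar left behind

def needs_recovery(movie_path):
    """On startup, a leftover sidecar means the movie was never finalised."""
    return os.path.exists(movie_path + ".recovery")
```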

(Technical note for serious nerds: there are probably ways of doing this by embedding new private atoms in the Quicktime file itself, but this might lead to a file with multiple “mdat” atoms, which can itself cause problems with less-than-ideal players.)

Tags: Production
