RedShark Replay: NLEs (Non Linear Editors - computer editing systems) have transformed the creative process since they were first introduced a quarter of a century ago. But how do they actually work?
Most of us working in the video industry have used an NLE (Non Linear Editor), but have you ever wondered how they actually work? After all, what you see on your computer monitors is really very far removed from the processes going on inside your computer. And that's a good thing, because we prefer to deal with pictures, not numbers.
The term "Non Linear Editor" is quite revealing, if you think about it. Before computer editing, there was no need to distinguish between linear and non linear editing, because all editing was linear, in the sense that it was a matter of locating content on sections of media that were either film or tape. You couldn't just jump instantly from one point to another without going through all the points in between. Admittedly you could traverse those points faster in fast-forward or rewind, but what you specifically couldn't do was jump instantly.
But with Non Linear Editing, you can jump quickly, or at least within the duration of a frame (with video) or a sample (with audio). And that's a crucial difference, because it means that you never really have to alter your original material to create an edit. This is called non-destructive editing.
The fact that you can move from point to point instantly means that you can leave your original material untouched and create an edited piece by effectively leaping like a butterfly from point to point.
So how is it that non linear editing is so graceful, so agile, and just so useful? How is it that we're able to create complex projects and have them play out in real time, seemingly without having to move gigabytes of data every time we make an edit or tweak an effect?
Well, it's partly because of the difference between playing a record and a tape, and partly because anything electronic or digital moves so fast.
When you play any type of disk, it's easy to move the "pickup" to anywhere on the platter. There will always be a bit of a delay, but that's not an issue if you know in advance where you have to move to - and if you have a bit of memory to spare.
Let's say you're playing some material on a "track" (remember, this is the simplified version) and you want to "cut" to some material on another track.
There are two possibilities here. One is if you can move your playback head so quickly that you're ready to play the incoming frame before the outgoing frame has finished being displayed. That's, frankly, unlikely. The other is if you know in advance that you're going to have to play that next clip, and have taken the sensible precaution of storing - or "buffering" - some frames from the outgoing clip in memory.
If you do this, it will give you time to move the playback head to the start of the next clip in time to read the data and be ready to play it - as soon as the last frame from the outgoing clip has finished playing. As far as viewers are concerned, this will be a seamless cut - with no dropped or frozen frames. And that's exactly what you want.
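To make the idea concrete, here's a minimal sketch of that buffering trick in Python. It's a simplified model, not any real NLE's playback engine: the clip names, the `play_cut` function and the fixed "seek cost" are all invented for illustration.

```python
from collections import deque

SEEK_DELAY_FRAMES = 3  # hypothetical: moving the playback head costs the time of 3 frames

class Clip:
    def __init__(self, name, frame_count):
        self.name = name
        self.frames = [f"{name}:{i}" for i in range(frame_count)]

def play_cut(outgoing, incoming, buffer_size=4):
    """Play the tail of one clip, then cut seamlessly to the next.

    While the buffered tail of the outgoing clip plays from memory,
    the playback head is free to seek to the incoming clip's first frame.
    """
    assert buffer_size >= SEEK_DELAY_FRAMES, "buffer must cover the seek time"
    buffered = deque(outgoing.frames[-buffer_size:])  # tail of the outgoing clip, held in RAM
    played = []
    while buffered:
        played.append(buffered.popleft())  # play from memory...
        # ...meanwhile, the head is travelling to the incoming clip
    played.extend(incoming.frames)  # head arrived in time: no dropped or frozen frames
    return played

print(play_cut(Clip("interview", 10), Clip("cutaway", 3)))
```

The key point is that the viewer only ever sees frames arriving on time; the seek happens "behind" the buffer.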
If you get this right, you will, in theory, be able to splice any clip from anywhere in your storage system, to any other clip, without any indication that the two pieces of media are not actually adjacent to each other.
Now, to add a little bit of extra flexibility to the system, most NLEs use Tracks (even if not explicitly so). Using tracks makes it easier to see what's going on in complicated edits and sequences. It also helps when applying effects, because it imposes a hierarchy between clips where, for example, the top one might be given priority over lower ones.
So how do you make video clips appear on different tracks?
It's relatively simple. You essentially just "tag" them. You say, in the NLE's database, "Whenever this piece of media is called up, put it on track three". That's unless you change it, of course. Tracks aren't real things at all in the world of NLEs.
It's the term "Database" that's absolutely key here. It may sound strange to say it, but your NLE's timeline is just a graphical view of the database that you refer to as your "Project". The project is really not much more than a database that keeps track of your clips and your edits. It stores the name of the media, the number of frames, the position of any In and Out points, the track, and in all probability a stack of other data that will help in the complex task of sorting an apparently random pile of media into a polished edit.
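You could sketch that database in a few lines of Python. The field names here are invented for illustration - no real NLE uses exactly this schema - but the shape of the idea is the same: edit records pointing into untouched media files, with the timeline being just a query over them.

```python
# The "project": a table of edit records pointing at original media files.
project = [
    # media file        in-point  out-point  track  timeline position (frames)
    {"media": "a001.mov", "in": 120, "out": 240, "track": 1, "at": 0},
    {"media": "b002.mov", "in": 0,   "out": 90,  "track": 1, "at": 120},
    {"media": "logo.png", "in": 0,   "out": 210, "track": 2, "at": 0},
]

def clips_on_track(db, track):
    """The 'timeline' view is nothing more than a sorted query over the database."""
    return sorted((c for c in db if c["track"] == track), key=lambda c: c["at"])

for clip in clips_on_track(project, 1):
    print(clip["media"], "starts at frame", clip["at"])
```

Moving a clip to another track, in this model, really is just changing one field - which is why tracks "aren't real things at all".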
It does get quite a lot more complicated than this, because we don't just work with whole clips. The process of editing is made a lot simpler because we can create subclips and sequences. A subclip is a section of an original piece of media that has been "trimmed" to cut out any stuff that we don't want to use. Imagine a long interview in a single take. It's unlikely that we'll want to use all of it, but we might want to put several sub-sections together into what looks like a continuous interview. So we create Sub Clips from all the parts that we want to keep, excluding the bits that we don't want in the edit.
Each subclip in the system behaves almost exactly like a real clip - which is surprising, because they don't really exist except as some numbers in the database. They don't exist, that is, as separate pieces of media. This has enormous benefits, the biggest of which is probably that there is no need to create "new" media clips to form the basis of the subclips. Instead, the subclips are just playback points that are selected almost in real time when the project is played. They have no weight or mass, and they are incredibly flexible.
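Here's a rough sketch of that relationship, assuming a simplified model where a subclip is literally just two numbers and a reference. The class and file names are hypothetical.

```python
class Media:
    """The original file on disk: never modified."""
    def __init__(self, name, frame_count):
        self.name = name
        self.frame_count = frame_count

class SubClip:
    """Just an in-point and an out-point in the database: no new media is created."""
    def __init__(self, media, in_point, out_point):
        self.media = media
        self.in_point = in_point
        self.out_point = out_point

    def frames(self):
        # Resolved at playback time: read this range straight from the original file
        return [(self.media.name, f) for f in range(self.in_point, self.out_point)]

interview = Media("interview_take1.mov", 18000)   # one long take
answer_1 = SubClip(interview, 500, 900)    # the bits we want to keep...
answer_2 = SubClip(interview, 4200, 4700)  # ...both point at the same untouched file
```

Both subclips "weigh" nothing: deleting, duplicating or re-trimming them never touches `interview_take1.mov`.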
At the other end of the scale are sequences, which are assemblies of full clips and subclips. You can pretty much think of sequences as "edits" that can themselves be moved around like clips; put together all the sequences you want to use, and you'll have a finished program.
What's really clever is that with most NLEs you can put subclips within sequences (of course!) and even sequences within sequences.
If that sounds complicated - and it is for the developers who write the software - it really all boils down to your timeline representing an Edit Decision List (EDL): the list of frames within media clips that the NLE jumps to while it's playing back a project.
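A sketch of how sequences-within-sequences might collapse into a flat EDL, under a deliberately simple convention (a tuple is a clip, a list is a sequence - both invented here for illustration):

```python
def flatten(item):
    """Recursively expand nested sequences into a flat EDL:
    an ordered list of (media, in_point, out_point) playback jumps."""
    if isinstance(item, tuple):   # a clip or subclip: (media, in, out)
        return [item]
    edl = []                      # a sequence: a list of clips and/or sequences
    for child in item:
        edl.extend(flatten(child))
    return edl

interview = [("take1.mov", 500, 900), ("take1.mov", 4200, 4700)]  # a sequence
opening = [("titles.mov", 0, 120), interview]  # a sequence containing a sequence

for jump in flatten(opening):
    print(jump)  # each entry is a point the player seeks to; no media is copied
```

However deep the nesting goes, playback only ever sees this flat list of jump points into the original media.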
So, in a sense, an NLE is both more - and less - than it would seem. It's less in the sense that it is merely a graphical representation of a database of media objects and edit points; and it's more in the sense that the sum of these elements adds up to something that is truly wonderful. It's something that can take a pile of disorganised media clips and turn them into a feature film (with the help of a talented human editor, of course).
There's a lot more to it than I've described here: Effects, codecs, media management, project management and a host of other complex technologies that have developed over the last two or more decades.
But even after around twenty-five years, the basics of NLEs have stayed the same. And that jump in capability from linear tape and timecode editing to Non Linear is just as big, and just as remarkable.