
A short history of editing



Editing has been around in one form or another for over 100 years now, and non-linear editing for over 20 years, but where do we stand at the moment and where are we going?

The first ever edits were made accidentally, by cameramen stopping and starting early hand-cranked film cameras. Gradually it was realised that these “cuts” could be used to join shots together to tell a story, and the movies were born.

Joining shots to tell a story

Essentially editing is still the same thing, joining shots together to tell a story, but the technology has changed immensely, especially in the last few years.
I started out as an editor in the 1980s working in videotape suites. Because tape editing was linear, if you needed to make a change that affected the duration of a shot you had to redo everything beyond that point. Needless to say, a lot of changes didn’t happen.

I remember when I first saw a non-linear editing system at the end of 1991. Although the picture quality was dreadful and the capabilities were very limited, I could see that this was the way editing would have to go.

Offline vs online

At first non-linear suites were only suitable for offline editing. The rushes were digitised at low quality, the editor worked with the director to create the programme, and then an edit decision list was used to “online” the programme in a linear suite, auto-conforming the material from the camera masters (if you were lucky). The fastest way to do this was called C-Mode, in which all the shots from each source tape were recorded onto the master at the right point in the programme. It was a fascinating, if slightly scary, process, as it was only when the last shot was laid down that you knew you had your finished programme.

Gradually, as computers have become more powerful and advances in video encoding technology have given us better pictures at lower bit rates, the concepts of offline and online have disappeared, with the editor able to work with the material at full quality.

The early days were a struggle

Whilst we struggled with the power of computers in the early days, we finally got to the point where they could work with broadcast-quality material at standard definition, just in time for the advent of HD. Now I see systems happily running multiple streams of HD, but of course we now have 4K to contend with. New codecs like XAVC and H.265 will give us the ability to work at resolutions we wouldn’t have thought possible.

The other thing that has happened is the inclusion of capabilities that used to be handled elsewhere. Nowadays most editing packages include colour correction, audio mixing, video effects and even 3D capabilities, so the systems are truly a one-stop shop for making anything from YouTube clips to feature films.

CPUs and GPUs

The speed of CPUs has stabilised at around the 3GHz mark, but multiple cores are now the way to go, with workstations using 8, 12 or even 16 cores to power the system. In the meantime the rest of the system has improved, with advances in bus speed, memory speed and disk access giving greater throughput for all those streams of video.

Probably the biggest change in the last couple of years, though, has been the adoption of GPU acceleration by many of the software packages. GPU acceleration now allows us to see things in real time that used to take hours to render, freeing the creative process to experiment and find the best combination of shots and effects.

Great facilities and no time in which to use them

I am jealous of all the new editors out there, starting to work on these systems with none of the restrictions I used to live with. But there is a downside.
With shrinking budgets and ever tighter deadlines, the modern editor seldom gets the time they need to do the best job. Worse still, the advent of tapeless cameras means that often the first time anyone gets to see the material is when the editor opens a shot in the viewer. At least when we had to digitise from tape, it was a chance to see the rushes and start thinking about how we were going to cut them together.

Head in the cloud

Hopefully, another recent development could improve the situation. With the advent of cloud-based storage and high-speed connections to mobile devices, I foresee a time when cameras upload media to the cloud as they shoot. The editor can view and “offline” the material on their mobile phone, tablet or augmented reality glasses whilst sitting in Starbucks or on the beach. When the full-resolution material is loaded onto the editing system, the editor could operate it remotely from another computer, possibly collaborating with another editor or director on another continent.

This might sound like science fiction, but Adobe demonstrated something very similar at IBC last year, and JVC have recently released cameras with media streaming capabilities. By removing the time it takes to get the media to the edit suite and giving that time to the editor to view the material and think about the edit, we should see better edits and less-stressed editors.

As the quality of images from digital cameras improves and computers get faster and faster, we will soon have the ability to edit feature films at full quality on equipment you can buy in your local technology store.

Just as being able to use a word processor doesn’t make you an author, being able to shoot and edit won’t make you a filmmaker, but it will open up the possibility for anyone with an idea, and the drive to succeed, to make their movie a reality.
