14 Mar

Why you should use Avid DNxHD and Apple ProRes



Here's another chance to read this article about why it's a good idea to use ProRes, Avid DNxHD, or any of the other production-quality compression codecs.

But while it’s true that more people than ever can afford to make top-end video, this new capability poses a few questions for the ideal editing workflow.

The pictures from the latest generation of cameras - and that includes DSLRs and dedicated video camcorders - are stunning. No one expected to be able to get pictures this good for so little money.

The quality of the images is all the more surprising when you realise how much they're being compressed to fit into the internal storage in the cameras. HD video involves almost unimaginable amounts of data. An uncompressed Full HD image (1920 x 1080 pixels) at 30 frames per second generates the equivalent of seventy-five copies of War and Peace per second! Cameras compress by as much as seventy times to fit it into in-camera storage.

What that means is that less than two percent of the original data is left in some cases, which makes it even more remarkable that the resulting pictures look so good. Of course, the codec makes sure that the data that’s discarded won’t be missed too much. What remains is the data that really matters.
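The arithmetic behind those figures is easy to check. Here's a quick back-of-envelope calculation (a sketch assuming 8-bit 4:4:4 frames at 3 bytes per pixel; real cameras use various bit depths and chroma subsampling, so the exact numbers vary):

```python
# Back-of-envelope data rates for uncompressed Full HD video.
# Assumes 8-bit 4:4:4 (3 bytes per pixel); actual cameras vary.
WIDTH, HEIGHT, FPS = 1920, 1080, 30
BYTES_PER_PIXEL = 3

uncompressed_rate = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS  # bytes per second
print(f"Uncompressed: {uncompressed_rate / 1e6:.0f} MB/s")  # 187 MB/s

ratio = 70  # in-camera compression ratio quoted above
print(f"At {ratio}:1: {uncompressed_rate / ratio / 1e6:.1f} MB/s")  # 2.7 MB/s
print(f"Data kept: {100 / ratio:.1f}%")  # 1.4% - "less than two percent"
```

So a 70:1 codec really does keep well under two percent of the original data, which is the point being made here.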

But even so, when an image is reconstructed from a highly compressed codec, it’s based on extremely fragile information. And sometimes, there’s just too much going on for the codec to cope with, with the result that when it’s decoded, you can see artifacts - areas of the image that have been “damaged” by the compression.

Camera manufacturers know what they're doing. They don't want bad images coming out of their cameras, so it's safe to assume that in most cases, most of the time, highly compressed video (whether it's MPEG-2, AVCHD or some other variant of H.264) from a codec inside a camera is going to look pretty good.

Why not stick with these codecs and edit with them?

Well, you can. Most NLEs, including Lightworks, can now handle a wide variety of codecs natively. But this may not always be the best path to take.

There are several reasons why you should look for alternatives to editing with these highly compressed, so-called Long-GOP codecs.

Long-GOP codecs were designed to deliver video. They weren’t ever meant to capture or edit it

The reason that Long-GOP, highly compressed formats were created was to deliver video via DVDs, Blu-ray, and satellite and terrestrial digital television - and, more recently, through internet downloads. Every single ounce of optimisation that went into these codecs was focused on making the picture look good at a very low bandwidth. And they're very, very good at their job.

The problem is that almost every technical trick that was pulled to reduce bandwidth made the video harder to edit once it was compressed. The biggest problem is that to be able to see any image at all, the decoder has to reconstruct the image from adjacent frames - possibly five or six, but just as possibly as many as fifteen. And it has to do this for every video stream, so if you want to preview a dissolve in real time, your computer is going to have to decode two video clips even though it's probably struggling to decode one.
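The dependency chain can be illustrated with a toy model (a sketch assuming a simple, hypothetical IBBP GOP pattern; real encoders use more elaborate reference structures):

```python
# Toy model of Long-GOP frame dependencies (hypothetical IBBP pattern).
# I frames are self-contained; P frames reference earlier I/P frames;
# B frames also reference the next I/P "anchor" frame after them.
GOP = list("IBBPBBPBBPBB")

def frames_to_decode(index, gop=GOP):
    """Return every frame that must be decoded to display frame `index`."""
    if gop[index] == "I":
        return [index]
    # Last I frame at or before this one starts the dependency chain.
    start = max(i for i in range(index + 1) if gop[i] == "I")
    deps = [i for i in range(start, index) if gop[i] in "IP"] + [index]
    if gop[index] == "B":
        # B frames also need the next anchor (I or P) after them.
        nxt = next((i for i in range(index + 1, len(gop)) if gop[i] in "IP"), None)
        if nxt is not None:
            deps.append(nxt)
    return deps

print(frames_to_decode(0))  # [0] - an I frame stands alone
print(frames_to_decode(7))  # [0, 3, 6, 7, 9] - one B frame needs five decodes
```

An intra-frame codec like ProRes or DNxHD behaves as if every frame were an I frame: the list is always one entry long, which is why it takes so much less processing to scrub and preview.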

All of this puts a big strain on a "normal" computer. In practice it means slow, jerky playback and fewer effects in real time. It may also hamper the final quality of your edit, because each time the video is decompressed to carry out an effect, the result has to be recompressed into the same parsimonious compression scheme.

You can work with Long-GOP video, if you stuff your computer with RAM and it has enough processing power, but this is always going to be second-best to working with a codec that is designed for editing.






  • When Avid's DNxHD codecs first came out we ran them up against our uncompressed DS suite and pushed a whole load of material through, comparing the results on our CRT HD grade 1 monitor, and you just couldn't see any losses in the DNxHD content.
    The only real issue I've seen has been when DNx185 content has been laid off to HDCAM (with its horrid sub-sampling and compression) and then re-captured as DNx185. Artefacts start to appear and quality quickly falls away depending on the content and the original acquisition codec used.

  • You mentioned in this article that "Transcoding to an intermediate codec is not your only option." But I was wondering how comparable (in image quality) transcoding to an intermediate codec is against capturing in an intermediate codec.

  • If you capture from uncompressed directly to an intermediate codec, that will be better than transcoding from a Long GOP codec like H.264.

  • Recording from the camera using DNxHD or ProRes is very different than converting AVCHD or H.264 to DNxHD or ProRes before editing -- and you do not make this difference clear. Obviously, other than demanding significantly greater storage, the former is ideal. (But then uncompressed RAW is even better.)

    No quality is gained by converting AVCHD or H.264 files to DNxHD or ProRes before editing. AVCHD or H.264 must first be decoded to uncompressed data and then compressed to DNxHD or ProRes. Obviously no information that was eliminated during AVCHD or H.264 encoding can be, or is, restored during the decode. And, obviously, compressing to DNxHD or ProRes does not restore already lost information. In fact, information is always discarded during compression. So following your idea results in lower picture quality.

    What you are missing is that modern NLEs do not "edit with" the source codec. As each frame is decoded it becomes 4:2:2/4:4:4 uncompressed data. Editing is always performed with uncompressed frames. This happens on-the-fly. From a buffer of thousands of data packets, long-GOP decoding utilizes multiple packets of information to generate a frame of video. No modern computer has a problem with either reading buffers from disk or calculating an HD video frame from information already stored in memory.

    Moreover during encoding these packets are shuffled so they are recorded in exactly the correct order to make decoding very fast. As each frame is decoded the color-space is converted to 4:4:4 so it can be combined with 4:4:4 graphics. (Some NLEs may only convert video to 4:2:2.)

    All streams of uncompressed data are appropriately combined to make a data stream that goes only to the display. During export the same process--starting with encoded data from the disk-- is followed except the uncompressed result is compressed to the "export" codec. AT NO POINT IS UNCOMPRESSED DATA RECOMPRESSED TO AVCHD OR H.264. Thus, files are stored on disk in AVCHD or H.264 files that are very compact and which reduce disk read bandwidth.

    You may be thinking of the old days where after combining uncompressed streams the result was encoded back to the project codec which was the same as the source codec. This was called rendering. When iMovie was released, Apple completely eliminated rendering. The GPU did everything in realtime. Only during export was anything written to disk.

    Some NLEs do use rendering so that very complicated FX can be played back in realtime. And some will use rendered files to speed up exports -- although by deleting render files you can force them to start from the source files. THESE RENDER FILES NEVER USE A LONG-GOP CODEC.

    Because the move to 4K requires frames 4X bigger than HD, NLEs do support, during import, conversion of long-GOP to "proxy" or "optimized" formats. While this reduces the computational load during editing, it requires lots of storage and increases the required disk bandwidth. (An alternate approach decodes every other source pixel of 4K data so it is only working with FHD video.)
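For what it's worth, the proxy/optimized-media step described above can also be scripted outside the NLE. Here's a minimal sketch that builds an ffmpeg command to transcode a long-GOP 4K source to an HD ProRes proxy (assumes ffmpeg is installed; the file names, profile choice and scaling are illustrative only):

```python
# Sketch: build an ffmpeg command for a ProRes proxy of a long-GOP source.
# Assumes ffmpeg is installed; paths and settings are illustrative only.
import subprocess

def proxy_command(src, dst):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks", "-profile:v", "0",  # 0 = ProRes Proxy profile
        "-vf", "scale=1920:-2",                  # 4K down to HD, keep aspect
        "-c:a", "copy",                          # leave audio untouched
        dst,
    ]

cmd = proxy_command("A001_clip.mp4", "A001_clip_proxy.mov")
# subprocess.run(cmd, check=True)  # uncomment to actually transcode
print(" ".join(cmd))
```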

  • Hi Steve,

    I don't think I did say that you gain quality if you transcode from H.264 to ProRes. Of course you don't. But it may be useful to you anyway if your source codec is 8-bit, because you can then work in ProRes's ten-bit space. This won't add quality per se, but it leaves more headroom for grading etc.

    And it is always better to work with codecs that are designed to need less processing power. Normally that means intra-frame codecs. Although I'm told that Sony's XAVC is pretty efficient for editing in its long GOP format, and of course in its Intra Frame version. Yes you can do clever stuff now with Long GOP codecs in editing, but it still takes more processing grunt than ProRes.

    To get ProRes directly out of a camera, you'll have to record from HDMI or SDI to an external recorder.

  • Articles like these always make me happy that I left the Mac/FinalCut platform so many years ago, around the time of FCP 5. Transcoding is so Amiga... But I do appreciate RedSharkNews to no end since they seem able to entertain the concept of Windows machines. Over here in America, it seems like every video writer equates Mac with Computer, although since FCPX, many are now coming out of the jungle like the Japanese soldier in the 60s. Remember lads, all you have to do in Premiere is select "keyboard layout" and choose "Final Cut"... Then you can use AVCHD and video capture is "elegant & intuitive" you know, like the Macs were before August of 1995.

  • Remember lads, all you have to do in Premiere is select "keyboard layout" and choose "Final Cut"... Then you can use AVCHD and video capture is "elegant & intuitive" you know, like the Macs were before August of 1995.

    I see this typical scenario as the cause of much of the 'loose practice' witnessed in today's digital acquisition and post production chain: everything from letting the camera run between takes, to a lack of understanding of the benefits derived from working with intra-frame codecs even if that means transcoding, to a total ignorance of timecode, which is still the 'glue' that ties together an efficient post production workflow. Problems often arise when those skilled in the DSLR/Mac/Premiere/Youtube workflow progress on to a more professional scenario where they are spending other people's money and the stakeholders wish to exercise their right to see that their investment is spent wisely.

    A quick perusal of some high end cinema and broadcast forums like Lift/Gamma/Gain will reinforce the fact that since the dawn of the Mac 'Trashcan', more and more top shelf DaVinci Resolve workstations are being built on Win PC i7 and Xeon platforms. Faster, cheaper and easily re-spec'd as the PC hardware evolves. Perhaps the film school 'blinkers' are slowly coming off...

  • And Cineform - just as good as ProRes and DNxHD, properly cross-platform, but totally overshadowed since GoPro bought them. Good to see that the current version of Premiere Pro CC has brought Cineform back into the mainstream workflow.

  • " When iMovie was released, Apple completely eliminated rendering."

    Hmmm... and I was under the impression that the article was inclined toward the professional side of video. I don't see the point in presenting a consumer editor's capabilities as a competing factor over using DNxHD or ProRes in a multi-layered professional editor.

    Regarding Mr. Marshall's comments:

    I hear you loud and clear. I was one of those "film school" (actually TV Production) instructors. And far from being insulted by your comment, I applaud it. Whereas I tried to teach the virtues you mentioned, it fell on deaf ears. Today the tail is wagging the dog. Students come in and have no respect for professional protocol. Slating and logging have gone the way of punctuality and manners. They want to capture every worthless frame of video (as one enormous clip!) and then edit by exclusion. Maybe I'm just a throwback to the non-linear days. But I see contemplative forethought in editing by inclusion (from a select group of usable clips, might I add).

    To Mr. Shapton:

    Thank you for your explanation. The myriad of codecs out there can often leave one feeling like they are looking at a food menu with 101 items written in a foreign language. Rather than explaining them all (and likely adding to the confusion) you have distilled it down with a reasonable purpose. For those on the educational side, articles like this can be most helpful. We often worked using antiquated equipment with no incentive to learn/study about things we were not likely to acquire (at least anytime soon). It was often deemed a successful day just to keep things functional for the duration of class. Going home at night and learning something that was not applicable at the moment would have taken time away from praying for another day of equipment functionality.

  • Great article David!
    Keep up the good work ;-)

David Shapton

David is the Editor In Chief of RedShark Publications. He's been a professional columnist and author since 1998, when he started writing for the European Music Technology magazine Sound on Sound. David has worked with professional digital audio and video for the last 25 years.
