
Things you actually should fix in post

It'll buff out. Pic: Shutterstock

“Fix it in post” is a meme-worthy phrase for correcting things that have gone awry on a slipshod set, but there are things that can and even should be done once the cameras have stopped rolling.

If only we had the broken-off parts of the Rosetta Stone, we’d be able to accurately date the origins of the term “fix it in post.” The idea of leaving a mess for someone else to clean up is one that modern production practice seems keen to rehabilitate, though, as equipment works harder and harder to defer as many decisions as possible to circumstances that don’t cost five figures a day. Today, then, and in honour of post month, we’re going to consider the things you can fix in post, or even the things you should fix in post.

The machinations of the EU aside, it’s hard to say that there’s a huge consumer rush for beyond-4K delivery anywhere other than theme park rides. That could certainly change if someone came up with a good enough use for all those pixels, but even if anyone did manage to popularise 8K delivery, we’d need even higher resolution cameras to keep the likes of David Fincher happy in his quest to clean up his framing in post. Happily, such things exist, though even the manufacturers might be at pains to clarify that really high resolution cameras don’t necessarily mean really high resolution images.

So framing is a trivial post fix. It’s a shame, in many ways, that cameras generally lack ways to record large images while monitoring smaller ones, or any way to pass that cropping decision on to post in a way that’s widely supported. It would be trivial to implement, and yet it’s almost unheard-of. Otherwise, in principle, reframing is one of those rare things that generally works exactly as expected. In extremis, it’s possible for many cameras to out-resolve their lenses in specific circumstances, and in any case optical low-pass filtering is inevitably a compromise which prevents most cameras from filling their output images with information to the Nyquist limit. Downscaling can create pictures which do push that limit, so less downscaling on certain shots can conceivably be visible, even when, on paper, there are plenty of pixels.
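To make the point concrete, a post reframe of this kind is nothing more than a crop of a larger recording. A minimal numpy sketch, in which the `reframe` function and the frame sizes are illustrative assumptions rather than any real tool’s API:

```python
import numpy as np

# Stand-in UHD frame (height, width, RGB); real footage would come
# from decoded camera files.
frame = np.zeros((2160, 3840, 3), dtype=np.uint8)

def reframe(frame, centre_x, centre_y, out_w=1920, out_h=1080):
    """Punch in to an HD window of a larger frame. A pure crop:
    no resampling, so no information is invented, only discarded."""
    x0 = int(centre_x - out_w / 2)
    y0 = int(centre_y - out_h / 2)
    # Clamp so the window stays inside the source frame.
    x0 = max(0, min(x0, frame.shape[1] - out_w))
    y0 = max(0, min(y0, frame.shape[0] - out_h))
    return frame[y0:y0 + out_h, x0:x0 + out_w]

hd = reframe(frame, centre_x=2500, centre_y=900)
print(hd.shape)  # (1080, 1920, 3)
```

Because nothing is resampled, the only cost is the one described above: the cropped region gets less downscaling than the full frame would, so it sits closer to the limits of the lens and the sensor.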

Relying on raw

All that requires is more pixels to begin with, though, which is an almost inevitable concomitant of twenty-first century camera development. Many of the other things which have become fixable in post have relied on raw recording, a technology more or less specifically intended to bring more aspects of the image under post production control. Whether or not the average camera specialist thinks that’s a good thing is down to the reliability of the post people involved, but things like colour balance are only formally fixable when the material has been shot raw.

The word “formally” is weaselly there, because much can be done to fix broken colour in almost any reasonable picture, especially with log recordings, and we’re only discussing colour balance because it’s somewhat talismanic of what raw is good for. Theoretically, colour balance is simply a matter of scaling the red, green and blue data such that a colourless object is colourless. The problem is that doing it that way only works with a linear representation of brightness, that is, when the numbers we’re scaling directly represent the number of photons which hit the sensor, or at least are directly relatable to that number. Even in log modes, this is very rarely the case, and the actual brightness response of most cameras involves the opinion of an engineer. That’s why different cameras are pretty in different ways, and it’s not something manufacturers are inclined to give away.
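The linearity point is easy to demonstrate with toy numbers. In the sketch below, the grey-patch values and the 2.2 gamma are illustrative assumptions; the point is simply that the same per-channel gains neutralise a cast in linear light but fail once the values have been through a non-linear encoding:

```python
import numpy as np

gamma = 2.2  # assumed display-style encoding, for illustration only

# A neutral patch with a warm cast, in linear light (R, G, B).
linear_grey = np.array([0.9, 1.0, 1.1])
gains = linear_grey[1] / linear_grey   # scale so green is the reference

# In linear light, scaling works exactly as advertised.
balanced_linear = linear_grey * gains
print(balanced_linear)                 # [1. 1. 1.], perfectly neutral

# Apply the same gains after gamma encoding and the patch is no
# longer neutral, because (g * v) ** gamma != g * (v ** gamma).
encoded = linear_grey ** (1 / gamma)
balanced_encoded = encoded * gains
print(balanced_encoded ** gamma)       # over-corrected, cast now inverted
```

Raw recording preserves (or can reconstruct) that linear relationship, which is exactly why this class of correction is only formally trustworthy on raw material.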

In the spirit of balance and fairness, it’s worth being clear that the meaning of the word “raw” has been stretched and redefined to mean a wide variety of things, from “sashimi” to “medium rare,” but they’re all helpful in terms of deferring expensive decisions to the less-expensive world of a grading or editing suite. There are, however, things that raw can’t help us fix. 

Motion blur

Motion blur, for instance, means that in theory, any post process that moves the frame around between frames, particularly if it moves the frame quickly, whether for animated reframing or stabilisation, could be said to be inaccurate. If something’s moving ten pixels vertically in the original frame, it’ll be vertically blurred by ten pixels. Move it ten pixels sideways in post, and theoretically it should now be motion blurred diagonally, but it won’t be – it’ll now be a square of blur. This will be a problem familiar to anyone who’s tried to track library footage of pyrotechnic sparks into a particularly violent camera move, and noticed that the sparks stop being fine streaks and start becoming parallelograms or boxes.
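The spark problem can be simulated in a few lines. This is a deliberately crude sketch: the blur is approximated by averaging shifted copies of the frame, and the shift by `np.roll`, neither of which any real stabiliser would use, but the geometry of the failure is the same:

```python
import numpy as np

# A single bright "spark" on a dark frame.
frame = np.zeros((64, 64))
frame[20, 30] = 1.0

def vertical_blur(img, length=10):
    """Approximate ten pixels of vertical motion blur by averaging
    vertically shifted copies: a crude box blur along y."""
    return np.mean([np.roll(img, s, axis=0) for s in range(length)], axis=0)

blurred = vertical_blur(frame)     # the spark is now a vertical streak

# Moving the frame sideways in post merely translates the streak.
# A real diagonal camera move would have produced a diagonal streak;
# the translated one stays stubbornly vertical.
shifted = np.roll(blurred, 10, axis=1)
print(np.count_nonzero(shifted))   # still 10 pixels in a vertical line
```

Animate that sideways move and re-render its blur, and the vertical streak gets smeared horizontally as well, which is how the fine spark ends up as the parallelogram or box described above.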

With gentle operating, motion blur will be subtle enough that this isn’t a big deal, but this particular consideration tells us something else, too: that we should ideally all be shooting everything at a thousand frames per second, with the tiniest possible exposure time. Do that, and even more camera controls can be deferred to post with minimal compromise. The desired frame rate can be altered, variable shutter timing (for Saving Private Ryan-style effects) can be simulated, and post production motion can be summed with actual camera movement to create accurate motion in composites. That’s not currently possible unless we’re really, really well-funded, but it’s not unreasonable to assume that the white heat of camera development might make things easier. It’ll also make companies like Nvidia, AMD and now Intel very happy as their huge-bandwidth devices find application in manipulating some of the most basic aspects of traditional camerawork.
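The shutter-in-post idea reduces to simple arithmetic. Assuming a hypothetical 1000 fps capture delivered at 24 fps, each delivered frame spans about 41.7 source frames, and averaging the first N of them stands in for a chosen shutter angle; the `synth_frame` helper below is an invented illustration, not anything from a real pipeline:

```python
import numpy as np

capture_fps, delivery_fps = 1000, 24
per_frame = capture_fps / delivery_fps   # ~41.7 source frames per output frame

def synth_frame(burst, shutter_degrees):
    """Average the slice of a high-rate burst matching the requested
    shutter angle: 180 degrees uses about half the burst, 45 degrees
    about an eighth, giving a harsher, more staccato look."""
    n = max(1, round(per_frame * shutter_degrees / 360))
    return np.mean(burst[:n], axis=0)

burst = np.random.rand(42, 8, 8)         # stand-in frames for one output frame
soft = synth_frame(burst, 180)           # conventional 180-degree blur
crisp = synth_frame(burst, 45)           # Saving Private Ryan territory
```

The same per-output-frame bursts could equally be retimed to any delivery rate, which is why such a capture would let frame rate, shutter and motion all become grading-suite decisions.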

Where that leaves on-set monitoring is another matter, but what it does tell us is what most people already know: maximum fixability in post depends on camera performance and post integration. If there’s anything missing, it’s the perpetual lack of standardisation between those things. Camera progress is almost inevitable, given the need of manufacturers to sell a specification; connecting that to a series of nice, easy buttons in the edit suite might be rather harder work.

Unless you make both, of course.

Tags: Post & VFX
