RedShark News - Video technology news and analysis

RedShark Summer Replay: Slow Motion Explained, Quickly

Written by Phil Rhodes | Sep 2, 2013 2:00:00 PM
Vision Research

RedShark is only 10 months old, and our readership is growing all the time. So if you're a new arrival here, you'll have missed some great articles from earlier in the year.

These RedShark articles are too good to waste! So we're re-publishing them one per day for the next two weeks, under the banner "RedShark Summer Replay".

Here's today's Replay:

 

Slow Motion Explained, Quickly

"Slow motion" sounds pretty simple, doesn't it? But the terminology is counter-intuitive – I mean, "high speed" camerawork makes things appear to move slowly? It's not the most obvious choice of words. Then again, we also have techniques to make things appear to move quickly, and we call that "under-cranking" – hardly an obvious way to describe what's going on, especially as cameras haven't routinely been hand-cranked for decades.

That said, it isn't a difficult technique to understand. To view any event in slow motion we can simply play back the frames of film or video footage at a slower speed than they were recorded, and that's possible no matter what speed they were shot at. Most people will be aware, though, that recording a scene at 24 frames per second and playing it back at six might give you a slow motion effect in which everything takes four times longer to occur, but it won't give you an illusion of fluid motion.

Speeding up to slow down

Even if we take 24fps as our benchmark for smooth motion it's clear that we can't produce convincing, fluid slow motion simply by slowing down playback; instead, we need to shoot more frames to begin with, which is where the phrase high speed comes from. A camera such as the Sony FS700 will shoot at 240fps which, when played back at 24fps, results in things moving at one tenth the usual speed.
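The arithmetic above is simple enough to sketch in a few lines. This is just an illustration of the capture-rate/playback-rate relationship described here – the function names are ours, not anything from a camera SDK:

```python
def slowdown_factor(capture_fps: float, playback_fps: float) -> float:
    """How many times slower the action appears on playback."""
    return capture_fps / playback_fps

def playback_duration(real_seconds: float, capture_fps: float, playback_fps: float) -> float:
    """Screen time produced by an event of the given real-world length."""
    return real_seconds * slowdown_factor(capture_fps, playback_fps)

# An FS700 burst at 240fps, played back at 24fps:
print(slowdown_factor(240, 24))         # 10.0 – one tenth the usual speed
print(playback_duration(1.0, 240, 24))  # 10.0 – one real second fills ten seconds of screen

# The earlier example: 24fps footage played back at six:
print(slowdown_factor(24, 6))           # 4.0 – everything takes four times longer
```

The same two numbers also tell you how much media you'll chew through: shooting ten times the frames means roughly ten times the storage (or film stock) per second of real time, which is exactly why high speed work was so expensive on 35mm.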

So far, so straightforward, but it's this requirement to record scenes at very high frame rates that makes this an interesting issue. It wasn't long ago that merely shooting HD resolutions at normal frame rates was a technological challenge, so it should be clear that doing the same thing at high or very high frame rates can still present problems that are only solvable with technology that's exotic and rare enough to be expensive. Back when this sort of work could only be done on film, at least at resolutions suitable for theatrical release, costs were ferocious, with 35mm film stock thundering through the gate at a rate sufficient to make the production accountant whimper like a frightened puppy.

Varicam

The very first device directed at filmmakers which was capable of shooting video at rates significantly in excess of those used for broadcast TV was probably the Panasonic Varicam. While limited to 720p resolution, it offered rates up to 60fps, suggesting it was aimed at sports coverage, which is often broadcast at 720p60. This was useful for music videos but not much to write home about if you happen to be making a sequel to The Matrix. Panasonic's later HVX-200 camera was widely viewed as the affordable, low-end equivalent to the Varicam, and was widely purchased for its 60fps capabilities even though it was otherwise not famous for producing excellent images.

Now, the FS700 and the Canon C500 are among the first non-film cameras to offer what we might call proper slow motion in a package that costs less than a very nice car, and we can probably expect to see a rash of slow motion on Vimeo over the next year or two, much as very shallow depth of field was perhaps a bit more popular than it should have been after the Canon 5D Mark II arrived.

 


Seriously high speed

Still, these cameras – offering up to a few hundred frames per second, but no more, at least not without compromising other aspects of image quality – don't obviate the high-end options. Companies like Redlake and Vision Research originally served scientific and industrial customers, providing high speed cameras to observe experiments and allow machines to see things. The first electronic high speed photography was shot on repurposed industrial cameras, although the companies quickly realised that a market existed and now offer devices specifically tailored to the requirements of cinematography. These will shoot at up to a couple of thousand frames per second, which is the point at which things like insects' flapping wings and speeding bullets become visible.

The requirements for very high speed electronics, fast memory to record the resulting images, and extremely low noise image sensors – to capture a decent picture with only a very brief exposure to light – make these cameras expensive. They are invariably rented by the day, at a cost which would pay for an FS700 in less than a week, but if you need more than 240fps, that's the choice that's available.

Help from software

Or, well, not quite. You can, regardless of the original recording rate, use a software solution such as Twixtor or the inbuilt time remapper in After Effects. Both of these use optical flow interpolation, a technique related to motion compensation in video compression, which attempts to intelligently identify areas of motion, then apply interpolation to approximate the movement between frames. Optical flow can work astonishingly well, but it's very dependent on the subject. Transparent or finely detailed objects, such as smoke, hair or glass, as well as objects like a chain-link fence, can cause serious problems when the software tries to identify where a part of the image is moving, since an area of the image obscured by a transparent object may appear to be moving in two directions at once.
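To give a feel for what the software is doing, here's a drastically simplified, one-dimensional sketch of the idea: estimate how far the picture content moved between two frames, then synthesise an in-between frame by shifting halfway. Real tools like Twixtor compute a dense 2D flow field per pixel; this toy (our own illustration, not any product's algorithm) finds only a single global shift by brute-force matching:

```python
def estimate_shift(frame_a, frame_b, max_shift=4):
    """Find the integer shift that best maps frame_a onto frame_b (the 'flow')."""
    best_shift, best_err = 0, float("inf")
    n = len(frame_a)
    for s in range(-max_shift, max_shift + 1):
        # Sum of squared differences between frame_a and frame_b shifted back by s
        err = sum((frame_b[(i + s) % n] - frame_a[i]) ** 2 for i in range(n))
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

def interpolate_midframe(frame_a, shift):
    """Synthesise the frame halfway along the estimated motion."""
    n = len(frame_a)
    half = shift // 2
    return [frame_a[(i - half) % n] for i in range(n)]

a = [0, 0, 9, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 9, 0, 0, 0]   # the bright "blob" moved two samples to the right
s = estimate_shift(a, b)        # 2
mid = interpolate_midframe(a, s)
print(mid)                      # [0, 0, 0, 9, 0, 0, 0, 0] – the blob halfway along
```

Note what a simple cross-dissolve would have produced instead: two half-brightness blobs in both positions at once. That contrast is the whole point of motion-compensated interpolation – and it's also why two overlapping motions in one area of the image (the chain-link fence problem) defeat it.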

These problems can be minimised by careful shooting. People have even shot subjects for later optical flow treatment against a green screen, so that the software does not have to deal with a multiple-layer image with possibly contradictory motion. Shooting at a high shutter speed – perhaps the shutter speed implied by the frame rate at which the shot will eventually be delivered – gives the software sharper edges to work with, and simply staging shots such that complex objects do not obscure complex backgrounds will help. And finally, shooting on a moderately high-speed camera such as an FS700, then interpolating to even higher speeds, gives the software far less work to do.
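The shutter-speed point above is easy to put into numbers. Under the common 180-degree shutter convention, exposure time is half the frame interval, so shooting with the exposure a 240fps delivery would imply means exposing for around 1/480th of a second even if the camera is only running at 24fps. A small sketch, assuming the 180-degree convention (the function name is ours):

```python
def exposure_time(frame_rate: float, shutter_angle: float = 180.0) -> float:
    """Exposure per frame, in seconds, for a given frame rate and shutter angle."""
    return (shutter_angle / 360.0) / frame_rate

print(exposure_time(24))    # about 0.0208s, i.e. 1/48s – normal-speed capture
print(exposure_time(240))   # about 0.00208s, i.e. 1/480s – what a 240fps delivery implies
```

The shorter exposure freezes motion within each frame, which is exactly what gives the interpolator the sharp, unambiguous edges it needs – at the cost of demanding a lot more light.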

Morphing

The technique used to estimate the motion of objects between frames is effectively morphing, so the very best subjects are those which will obscure or excuse the slight feeling of warped motion that the technique can create. Optical flow works particularly well on pyrotechnic fireballs shot against a black background, although inconveniently it will often fail badly on a dense cloud of sparks, given the difficulty of identifying individual sparks in order to recognise them from frame to frame and track their individual motion. Some software provides the option to track such elements manually, and to draw masks to isolate layers, but it can be extremely labour-intensive.

Given software like Twixtor and the recent availability of much more affordable high speed electronic cameras, it's never been easier to create attractive slow motion photography. What worries me is that it'll become a fashion statement, just like super-shallow depth of field. Overusing the technique now that even the smallest productions can afford it devalues it for everyone, so if this article does nothing else, I'd still be pleased if it encouraged people to use a light touch on what Sony call the Slow and Quick Motion button.