
The death of the dumb lens

Cooke's /i technology has played a leading role in lens evolution

With visual effects so commonplace today, a fast and foolproof means of reproducing lens characteristics in post production is needed: enter the smart lens.

What is a smart lens? It’s one that allows information like the T-stop, focal length, focal distance and depth of field to be recorded frame by frame as the camera is rolling. The information is transmitted to the camera via electronic pins in the lens mount, and recorded in the metadata of the video file. In post it can be accessed for use in visual effects work.
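As a loose illustration of the idea (and emphatically not the real /i or LDS packet format, which this does not attempt to reproduce), per-frame lens metadata might be modelled like this in Python:

```python
from dataclasses import dataclass

# Hypothetical per-frame record; the real /i and LDS packets have
# their own layouts, which this sketch does not reproduce.
@dataclass
class LensFrameData:
    frame: int               # which frame of the clip this belongs to
    t_stop: float            # e.g. 2.8
    focal_length_mm: float   # actual focal length (matters on zooms)
    focus_distance_m: float  # where the 1st AC has set focus
    near_focus_m: float      # near limit of depth of field
    far_focus_m: float       # far limit of depth of field

# One sample per frame, carried in the video file's metadata and
# extracted again in post.
clip_lens_data: list[LensFrameData] = []
```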

Handwritten camera logs kept by the 2nd AC provide some of this information, but they can never be as accurate or complete as what the lens and camera themselves can record. Imagine a car chase where the 1st AC is continually adjusting the focal distance as the car moves around, the operator is crash-zooming in to grab certain moments, and maybe the DIT is pulling the iris as the chase goes into a tunnel. Even a simple over-the-shoulder shot in a dialogue scene will have dynamic focus changes as the actor leans forwards or back. Lens data records all those changes as they happen and keeps them tied to the footage, so that no-one has to guess what happened when.

Why is this useful? Let's say a shape-changing alien robot is stomping after those speeding cars, or a steampunk airship is cruising past the window in the background of that over-the-shoulder shot. To render the correct bokeh on those elements, the VFX artists need to know the depth of field from moment to moment. As the actor in that dialogue scene leans closer to camera and the 1st AC racks with her, the airship needs to get a little more out of focus. As the cars zoom into the tunnel and the DIT opens up two stops, the robot's head needs to look a little softer compared to its reaching hand in the foreground. It's subtle accuracy like this that makes the difference between a convincing VFX shot and one that you know looks fake without being able to put your finger on exactly why.
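For those who want to see the optics behind that, here is a minimal Python sketch of the near and far focus limits using the standard thin-lens hyperfocal formulas. It treats the T-stop as an f-number (ignoring transmission loss) and assumes a common Super 35 circle of confusion:

```python
def depth_of_field(focal_mm: float, stop: float, focus_m: float,
                   coc_mm: float = 0.025) -> tuple[float, float]:
    """Near/far focus limits in metres, via the standard thin-lens
    hyperfocal formulas. Treats the T-stop as an f-number, and
    0.025 mm is a commonly quoted Super 35 circle of confusion."""
    f, s = focal_mm, focus_m * 1000.0          # work in millimetres
    hyperfocal = f * f / (stop * coc_mm) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = (s * (hyperfocal - f) / (hyperfocal - s)
           if s < hyperfocal else float("inf"))
    return near / 1000.0, far / 1000.0

# A 50 mm at T2.8 focused at 3 m covers roughly 2.77 m to 3.27 m;
# open up two stops to T1.4 and the zone shrinks to about 2.88-3.13 m,
# which is exactly why that robot's head needs to soften.
```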

Two of the major lens data protocols for cinema lenses are Cooke /i and ARRI’s LDS, with the former having been licensed to other lens manufacturers including Zeiss and Fujinon. Zeiss has expanded the /i protocol to create eXtended Data (XD) which includes distortion and shading characteristics too.

All lenses, especially the quirky ones so beloved of DPs today, have "flaws" that need to be accounted for in VFX, most notably distortion (the bowing of straight lines, particularly at the edges of wide lenses) and shading (vignetting, the darkening of the image towards the corners, especially at fast T-stops). These characteristics change with focal length, focal distance and aperture.

It has been common practice for the last decade or two to shoot a lens grid (a chequerboard-like pattern) that reveals the distortion and shading of a lens. It's ideally filmed at multiple stops on every lens used for VFX shots. The VFX team analyse this to create distortion and shading maps. Then the footage is undistorted and unshaded, the CG elements are composited in, and the whole thing is re-distorted and re-shaded.
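As a rough sketch of that round trip, assuming OpenCV's Brown-Conrady distortion model and placeholder values standing in for numbers that would be solved from a real grid:

```python
import cv2
import numpy as np

# Placeholder intrinsics and Brown-Conrady coefficients, standing in
# for values solved from a real lens grid (k1, k2, p1, p2, k3).
K = np.array([[1800.0, 0.0, 960.0],
              [0.0, 1800.0, 540.0],
              [0.0,    0.0,   1.0]])
dist = np.array([-0.12, 0.03, 0.0, 0.0, 0.0])

plate = cv2.imread("plate_frame.png")  # hypothetical filename

# 1. Undistort the plate so straight lines are actually straight.
undistorted = cv2.undistort(plate, K, dist)

# 2. Composite CG over the undistorted plate (done in a compositing
#    package in reality; here the comp is just the plate itself).
comp = undistorted

# 3. Re-distort: for every pixel of the distorted output, find where
#    it lives in undistorted space, then sample the comp there.
h, w = comp.shape[:2]
xy = np.stack(np.meshgrid(np.arange(w), np.arange(h)), axis=-1)
pts = cv2.undistortPoints(xy.reshape(-1, 1, 2).astype(np.float32),
                          K, dist, P=K).reshape(h, w, 2)
redistorted = cv2.remap(comp, pts, None, cv2.INTER_LINEAR)
```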

Conventional lens data helps the VFX team use the right distortion and shading maps for each moment of a shot, but there is still some guesswork. If a lens is being pulled during a take from T5.6 to T2.8, the maps might only exist for those two endpoints, because those were the only stops at which the lens grids were shot. A frame at T4, say, will have to be an interpolated guess.
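One plausible way to form that guess, assuming hypothetical k1 radial-distortion values at the two measured stops and interpolating linearly in whole stops (real lenses won't necessarily behave this smoothly):

```python
import numpy as np

# Hypothetical k1 radial-distortion values solved from grids shot
# at the two endpoints of the iris pull.
k1_by_stop = {2.8: -0.115, 5.6: -0.098}

def k1_guess(t_stop: float) -> float:
    """Interpolate linearly in whole stops (log2 of the T-number
    squared). A plausible assumption, not a law: a real lens may
    not vary this smoothly between the measured points."""
    xs = sorted(k1_by_stop)
    stops = np.log2(np.square(xs))
    return float(np.interp(np.log2(t_stop ** 2), stops,
                           [k1_by_stop[x] for x in xs]))

k1_at_t4 = k1_guess(4.0)  # ~-0.106: an interpolated guess, as described
```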

But with newer systems like Cooke's /i3 or Zeiss's CinCraft Mapper, the lens data unlocks frame-accurate distortion and/or shading maps, whatever the settings. In the case of CinCraft Mapper (an online portal which reads the metadata from your footage and returns the maps in standard file formats) the maps are derived from the Zeiss factory's own 3D models of their lenses. According to Cooke, their maps are specific to each individual lens, identified by its serial number.

It's not just post-production VFX that benefits from lens data. It's essential in virtual production for tying the images on the LED wall accurately to the foreground, it can be handy for a DP looking back at last year's footage that they now need to shoot pick-ups for, and it's very useful for ACs too. With the right equipment, like a Lens Display Unit, a 1st AC can see in real time the lens's depth of field and whether their subject is within it, a check along the lines of the sketch below.
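Reusing the hypothetical depth_of_field() function from the earlier sketch (the values here are, again, invented):

```python
# Conceptual only: a real Lens Display Unit works from the lens's
# own data stream rather than recomputing the optics like this.
near, far = depth_of_field(focal_mm=75, stop=2.0, focus_m=4.0)
subject_distance_m = 4.1   # hypothetical, e.g. from a rangefinder
subject_in_focus = near <= subject_distance_m <= far
```

There is no doubt that the future of lenses is smart.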
