
How clever should we let cameras get?

Written by Roland Denning | Jan 19, 2022 12:00:00 AM

Replay: Cameras are getting cleverer by the year. But at what point do we draw the line between automation and the discretion of the camera operator?

For many years, I rejected the whole idea of autofocus. If there wasn’t a focus puller or assistant to hand, my left hand muscles were so accustomed to those twists of the lens barrel that I didn’t really need to think about it. I owned cameras that had an autofocus function but never bothered to learn how to switch it on. Autofocus seemed strictly for amateurs.

But things change – resolutions now exceed the capabilities of most viewfinders and, let’s be honest, eyesight deteriorates with age. And, crucially, autofocus has now got really good. It’s usable. And for documentary work, it can be invaluable.

Professionals want maximum control, amateurs want the best pictures possible with the minimum of fuss. Maybe those two goals aren’t too far apart; professionals also want the best pictures with the minimum of fuss, they just want to define what ‘best’ is.

There is, perhaps, some nostalgic attachment, or an element of snobbery, to the notion that if we are pros, we should do the maximum manually – that’s the proper way to do things. But I believe we are at a turning point; more automation and computation are inevitably going to enter the professional workflow.

The needs of stills and video are different

The issue is complicated by the influx of users and technology from the stills world; not all the features of stills cameras are so relevant to video. Whereas in stills the aim is to optimise each individual shot, in the moving image we want to maintain consistency throughout a scene. Auto exposure, for example, while fully accepted in the stills world, makes little sense in video, where the brightness of our subject is constantly changing as it moves.

Stabilisation is another feature associated with consumer cameras, and stills cameras in particular, which is absent from most high-end professional video cameras. It is not a substitute for a gimbal or a Steadicam, but it can give lightweight cameras the stability of well-balanced, heavier cameras. But we are beginning to see it at the high end, and maybe the future of stabilisation is the combination of in-camera data and post-production software, as featured in the Sony FX9.
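To make that combination concrete: the sketch below is purely my own illustration of the principle – per-frame gyro metadata recorded in camera being used to counter-rotate and crop frames in software afterwards. It is not Sony’s pipeline, and the function and metadata names are invented for the example.

```python
# Illustrative sketch only: counter-rotating frames using per-frame gyro
# metadata, roughly the principle behind camera-data-plus-post stabilisation.
# The metadata format is invented here; real systems use their own
# proprietary formats and far more refined motion models.
import cv2

def stabilise(frames, gyro_roll_degrees, crop_margin=0.9):
    """Counter-rotate each frame by the recorded roll angle, then crop
    slightly so the rotated edges never show."""
    stabilised = []
    for frame, roll in zip(frames, gyro_roll_degrees):
        h, w = frame.shape[:2]
        # Rotate about the frame centre by the opposite of the measured roll.
        m = cv2.getRotationMatrix2D((w / 2, h / 2), -roll, 1.0)
        rotated = cv2.warpAffine(frame, m, (w, h))
        # Crop a central window and scale back up to hide the black corners.
        ch, cw = int(h * crop_margin), int(w * crop_margin)
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        cropped = rotated[y0:y0 + ch, x0:x0 + cw]
        stabilised.append(cv2.resize(cropped, (w, h)))
    return stabilised
```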

This sort of proprietary technology is generally restricted to the large Japanese companies – Sony, Canon and Panasonic – which can spread the development of their technologies from mass-market stills to high-end cinema cameras. Unfortunately, it puts the smaller, maverick manufacturers at a great disadvantage.


Steadicam is still one of the best ways of getting a stable image, but in-camera stabilisation is getting ever closer.

The focus puller is not yet obsolete

At the drama end of things, the operator is never going to switch everything to auto. It is hard to imagine autofocus ever being used on a feature since getting that perfect pull depends on so many subtle, intuitive and variable factors. Sophisticated focus assist, however, could be the way forward.

There is, for example, the Preston Light Ranger 2, a fairly elaborate (and expensive) system that uses its own infrared scanner on top of the camera to superimpose focus guides on the focus puller’s monitor. It interfaces with Preston’s own wireless controls, so this is strictly for the high-end.  The scanners (there are two, to cover the range of lenses in common use) also need to be calibrated to the camera and lens. It’s fair to say the system is not yet in common use.

It is not difficult to imagine a future camera-derived touchscreen system, combined perhaps with a traditional follow-focus control, which could go seamlessly from focus assist to fully auto, maintaining focus even if actors consistently miss their marks.
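Purely as a thought experiment (no such camera exists, and every name here is hypothetical), the ‘seamless’ part might be as simple as blending the operator’s follow-focus position with the camera’s own distance estimate:

```python
# Hypothetical sketch: blending a manual follow-focus position with a
# camera-derived autofocus distance. assist = 0 is fully manual,
# assist = 1 is fully auto; values in between merely nudge the pull.
def blended_focus(manual_distance_m, autofocus_distance_m, assist=0.3):
    assist = max(0.0, min(1.0, assist))  # clamp the blend factor
    return (1.0 - assist) * manual_distance_m + assist * autofocus_distance_m

# e.g. the puller is at 2.1 m, face detection says the actor is at 2.4 m:
# blended_focus(2.1, 2.4, assist=0.3) -> 2.19 m
```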

Currently there is little incentive, from the manufacturers’ point of view, to develop such a system (and yes, it would involve a different approach to lens design); the people who would gain from such innovations (the focus pullers) are not those who actually buy the cameras, and from the producers’ point of view, it wouldn’t save any money unless it made the focus puller redundant. Which it won’t.


MK zooms are pricey, but electronic in-camera zoom systems like Sony's Clear Image system are now incredibly good for getting extra range.

When will lenses join the 21st century?

Lenses are one part of our current processes that haven’t changed much over the decades – and that’s why we love them. There is an appeal to beautifully hand-made, analogue objects in the digital age, so we put rehoused glass from another century on the front of a state-of-the-art digital machine. I detect an anomaly here, but one that is totally understandable, even if it is not totally rational.

In the documentary world, a lens like the Sony 18-110mm presents us with a 6:1 S35 zoom that is effectively parfocal, with excellent tracking, low distortion and minimal breathing. I say ‘effectively’ parfocal, as this and the other features of the lens are only possible through electro-mechanical and digital compensation. If you want a lens that behaves ‘naturally’ (genuinely parfocal and fully manual) you could go for the rather lovely Fujinon MK zooms (which are also a stop faster).

The problem is, not only are they over three and a half times more expensive, but you need two of them to cover the same range. Whatever the improvement in image quality on the screen (and let’s be honest, it’s marginal), it is outweighed by having to change lenses in what would otherwise be one shot. Sony’s Clear Image Zoom, which increases your zoom range by 1.5 to 2 times, is another electronic lens ‘enhancement’. You may, understandably, be suspicious of it, but work it does – and extremely well.
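Sony’s actual processing is proprietary, but the basic principle of any electronic zoom extension is a crop of the sensor image interpolated back up to full resolution. A bare-bones sketch of that principle, with invented names and none of the clever detail reconstruction, might look like this:

```python
# Bare-bones illustration of an electronic zoom extension: crop the centre
# of the frame and interpolate back to full resolution. Real implementations
# (Clear Image Zoom included) add far more sophisticated detail
# reconstruction than plain bicubic resizing.
import cv2

def electronic_zoom(frame, factor=1.5):
    h, w = frame.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    centre = frame[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(centre, (w, h), interpolation=cv2.INTER_CUBIC)
```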

Lens emulation in post isn’t really a thing yet, probably because we love those chunky lenses too much. But please don’t tell me it’s not feasible, as I simply won’t believe you. In the future I believe we will have the ability to dial in the quirkiest of lens characteristics – flare, bokeh, distortion, diffusion, smoothness, definition, contrast – with the advantage of being able to change our minds. There is an argument that the ‘character’ of vintage lenses offsets the ‘clinical’ nature of the digital image, but I would put forward a contrary argument: if we want to shoot raw to give the maximum options in post, why don’t we go for the most neutral, accurate lenses for the same reason?
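As a rough proof of principle only – entirely my own sketch, with made-up parameters and nothing like the sophistication a real grading tool would need – dialling a couple of ‘vintage’ characteristics into a clean frame could start as simply as this:

```python
# Rough sketch of dialling lens 'character' into a clean H x W x 3 image in
# post: a simple vignette plus mild barrel distortion. Parameters are
# invented; a real tool would model flare, bokeh, diffusion and so on too.
import cv2
import numpy as np

def add_lens_character(frame, vignette=0.4, barrel=0.08):
    h, w = frame.shape[:2]
    # Normalised coordinates centred on the frame.
    y, x = np.mgrid[0:h, 0:w].astype(np.float32)
    nx, ny = (x - w / 2) / (w / 2), (y - h / 2) / (h / 2)
    r2 = nx ** 2 + ny ** 2
    # Barrel distortion: sample progressively further from the centre,
    # so detail towards the edges is compressed inward.
    map_x = ((nx * (1 + barrel * r2)) * (w / 2) + w / 2).astype(np.float32)
    map_y = ((ny * (1 + barrel * r2)) * (h / 2) + h / 2).astype(np.float32)
    distorted = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
    # Vignette: darken towards the corners.
    mask = 1.0 - vignette * np.clip(r2, 0, 1)
    return (distorted.astype(np.float32) * mask[..., None]).astype(frame.dtype)
```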