
How the brain finds meaning in images (and whether we need pictures at all)


Image: Brain and Gorgonzola. Credit: John Sullivan / RedShark News

RedShark Technical Editor Phil Rhodes explores how the brain conjures understanding from visual stimuli and ponders the question: will our technological obsession with replicating reality ultimately eliminate our need to 'see' images?

At some point, we have to ask whether the ideal in cinema (or even specifically cinematography) is to recreate reality. Asked baldly like that, many would demur. But more resolution, greater dynamic range, wider colour gamut, higher frame rate and stereoscopy are all things designed to make the experience of watching a film or television production more like the experience of watching a real scene, and this is something that's been pursued for some time.

Douglas Trumbull's Showscan system went for higher frame rate and resolution in the late 70s, shooting 65mm negative at 60 frames per second. The costs were enormous, of course, but the pictures were beautiful, and while the system never really took off, it's just one example of many. Okay, to some extent these things are done for commercial reasons rather than a simple desire to provide a better experience to the audience, but that's producer-thinking. As technicians, engineers and cameramen, then, to what extent should it be our goal to provide the best possible simulation of reality?

Even considered solely as a financial imperative, the high-frame-rate photography of The Hobbit certainly didn't meet with universal approval, and there seems no immediate appetite to try it again, even though it is objectively a better simulation of reality than the traditional approach. All of this bears on our deeper understanding of what it means to observe a scene, especially in the context of a wider knowledge of what the scene represents in terms of its characterisation and place in a story. These are things which depend on the unimaginably complex mechanisms of human psychology. Working behind a camera, we're just playing with patterns of light, but we're trying to trigger (or contribute to triggering, at least) a fairly specific reaction in a human brain.

The Brain as Image Processor

In this way, we're going in at the ground floor. We're relying on all of the capabilities of the brain to interpret visual data, from simple shape and pattern recognition, through recognition of an object, to association of that object with meaning. This works whether the object is a human being, with all the complex associations that brings, or a small piece of gorgonzola – as if that's really any simpler from a perceptual standpoint. Try writing a computer program that can tell gorgonzola from cheddar, for instance, based on a picture of it.
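
(For a rough sense of what that would involve, here is a minimal sketch of the sort of program you'd have to write, assuming a transfer-learning approach with PyTorch and torchvision and a hypothetical folder of labelled cheese photos. It's an illustration of the machinery required, not anything the article itself proposes, and the folder layout and training settings are purely assumptions.)

```python
# Minimal sketch (an assumption, not the article's method): fine-tune a
# pretrained CNN to tell gorgonzola from cheddar, given a hypothetical
# folder laid out as cheese_photos/gorgonzola/*.jpg and cheese_photos/cheddar/*.jpg
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # input size expected by the pretrained network
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("cheese_photos", transform=transform)  # hypothetical path
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: cheddar, gorgonzola

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train only the new head
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
```

Even with all that borrowed machinery, the result is a classifier that merely labels pictures; the effortless, meaning-laden recognition a viewer performs is doing considerably more.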

The fact that these associations continue to work even when the visual information they're based on is as abstract as a projected film image is testament to the flexibility of the brain. When we observe a projected image, after a while, our awareness of its nature as a projected rectangle of light fades, and we concentrate on the reality of what the picture represents. It's a mistake to assume that the human visual system isn't used to two-dimensional scenes – anything more than a few tens of feet away from a human has effectively zero stereoscopic depth anyway – but it certainly didn't evolve to deal with the pseudomotion of pictures juddering by at 24 updates per second, with strange colorimetry and brightness compression. The fact that we're capable of recognising, say, a melon when it's depicted on screen is arguably comparable to our ability to understand fuzzy telephone conversations and decode from them the speaker's meaning and emotional state. Either way, as the past century of filmmaking attests, we don't actually need better visual data to make emotionally satisfying connections with the real-world concepts that data represents.


Pictures...without pictures?

To take this a stage further towards complete realism, the idea occurs that we could go in at a higher level, avoiding the ground-floor entry point of projecting a pattern of light and allowing the brain to interpret it. Could we, in theory, skip over the pattern recognition and association phases of recognition and impose an impression of a real-world object on a human brain without using a picture? Presumably it's possible; presumably applying a pattern of tiny electric currents to the right parts of the brain could conjure up the idea of a melon in someone's head without showing them a picture of a melon at all. Let's assume we're doing this with some sort of science-fiction field projection, to avoid the unpleasant concept of sticking electrodes into someone's brain as a prerequisite for the enjoyment of this particular hypothetical artform. You might even be able to tell a story using these techniques.

There are two objections. The first is that when we watch a film, as we've already discussed, we're not particularly aware of the artifice anyway. The process is widely referred to as suspension of disbelief, but one might argue that phrase trivialises a much more complicated process. Commercials, for instance, often work hard to make us not only suspend our disbelief – the product does exist, after all, and we can go and buy it – but to give us a positive impression of the nature of the product and its characteristics, ideally in a much more powerful sense than simply the idea that there's a chocolate bar in the room where there's actually only a television. Could a direct intervention in the human brain hope to produce a more powerful impression of chocolate-barness than even the best-judged commercial? I guess, maybe, but it's far from certain that such a subtle and difficult approach would have any better results than what we have right now – and we'd have to figure out how to actually do it before we could run an appropriate study.

But the main objection is far less esoteric. It is this: radio did not make books obsolete. Television did not make radio (or theatre, or film) obsolete. People still ride horses, despite cars. Photography for film and television may on one level be a tool for programming people's brains to understand a place, or characters, or a story, but there's a strong argument that on other levels it is something that people value for its own sake. That 24-frame, comparatively low-resolution, limited dynamic range picture is something we can reasonably expect to endure, even if other things – things which create an objectively more accurate simulation of reality – become commonplace as well.

Naturally, the purpose of this is not to argue against invasive brain probes that don't exist, but it might be a useful thought in the context of the current and apparently never-ending push for greater and greater fidelity: people have now made it pretty clear, twice, with stereoscopic 3D and with high frame rate, that they just don't want it.
