
What is the "cinema feel"?

RedShark Christmas Replay: There's a quality to the cinema experience, the 'cinema feel', that people discuss in relation to depth of field, grain, contrast and other image characteristics, all of which have changed significantly with digitalisation. But why? Guest author John Clark delves into frame rates and the science of perception.

As well as all the characteristics above, there is another contributory factor where digitalisation means fundamental change, and that is frame rate, or rather the lack of it. Unlike the director or editor, eager or exhausted from their attention to detail in post production, the typical cinema-goer is there to sit back and enjoy a high-end production, knowing they are neither threatened nor involved. Relaxation is important when enjoying movies: the thrills, the visuals, or giving thought to the implications of character and plot. No-one wants to experience the real terror of being shot at, or of getting caught in a car crash, though the inverse problem might explain why so many erotic sequences fail to hit the spot.

Digital projection is sometimes seen as being 'too smooth', to quote a British producer who considered introducing grain into the final cut of a high-end cinema film, while Peter Jackson's high frame rate shoot at 48fps for 'The Hobbit' drew criticism for yielding video-like images.

A champion of 'immersive cinema', cinematographer Douglas Trumbull ( douglastrumbull.com ) set up some large-scale tests when developing 'Showscan' in the 1980s and 90s to explore the options for film production and projection at different frame rates. He has expressed the interesting opinion that movies might be created with variable frame rates, matching the mood of the film to the experience of the audience, who are subliminally sensitive to frame rate. The simple assumption that more is better may not turn out to be the case.

We still tend to assume that the camera somehow mimics the eye and brain, while a projector is basically a camera in reverse, which was indeed sometimes the case in early cinema and for amateur film-makers who could adapt their cameras to project their film after processing. However, for Digital Cinema the Digital Light Processing (DLP) projector ticks along at 144fps, whatever the official frame rate of the movie, with individual frames described as being 'flashed' more than once to fill the available time (though the very concepts of 'frame rate' and 'flashing' shouldn't really be applied to 3-chip DLPs, whose micro-mirrors oscillate 50,000 times a second, with information updated in modulated form to change each mirror's pitch). This remarkable technical achievement is quite different to the traditional 'maltese cross' light/dark approach that matches the stop/start motion of a print as it moves through the projector, or to the scanned lines of a monitor. The notion of 'persistence of vision' building a perceptual bridge between one frame, or half-frame, and the next no longer needs to apply.
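
To make the modulation idea concrete, here is a minimal sketch of how a binary micro-mirror can approximate grey levels by pulse-width modulation, using the 50,000 cycles/second and 144 flashes/second figures above. The numbers and the helper function are purely illustrative assumptions, not a description of any actual DLP firmware.

```python
# Illustrative sketch: a DLP micro-mirror is binary (on/off), so
# intermediate brightness comes from the fraction of its very fast
# on/off cycles spent "on" within one 1/144s flash window.
MIRROR_CYCLES_PER_SECOND = 50_000   # mirror oscillation rate quoted above
FLASH_RATE = 144                    # flash windows per second

cycles_per_window = MIRROR_CYCLES_PER_SECOND // FLASH_RATE  # ~347 cycles

def on_cycles(grey_level: float) -> int:
    """How many of a window's mirror cycles are spent 'on' to
    approximate a grey level between 0.0 and 1.0 (hypothetical helper)."""
    return round(grey_level * cycles_per_window)

for level in (0.0, 0.25, 0.5, 1.0):
    print(f"grey {level:.2f} -> {on_cycles(level)}/{cycles_per_window} cycles on")
```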

When the Digital Cinema Initiatives (DCI) specification was defined, an obvious priority was to ensure that the massive libraries of movies shot at 24fps could still be seen, and using JPEG 2000 for DCP files creates high-quality archive material for the future. A format war was in no-one's interest when billions were needed to re-equip cinemas around the world, but equipment manufacturers like Christie are adamant that higher frame rates are on their way, which implies yet another wave of re-equipping for production and post.

What frame rates and why?

The potential multiples for higher frame rates are fairly obvious: 144fps itself, 2x72, 3x48 and 6x24 for 3-chip projectors, so 96 frames per second falls away as an option, unless the images are packaged as a fraction of 2 seconds rather than 1, i.e. 288 frames at 3x96 (and why not?). The situation is complicated by single-chip projectors, which use a colour wheel to project R, G and B successively, limiting them to 24 or 48 (and perhaps 96). Adaptations now mean that material shot at 25/50fps and 30/60fps can be projected, but with over 100,000 projectors now in use, there is little likelihood that this basic 144fps set-up will change for many years.
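
As a quick arithmetic check, a few lines of Python (just a sketch of the divisor arithmetic, nothing projector-specific) enumerate which content frame rates divide evenly into the fixed 144 flashes per second:

```python
# Which content frame rates fit a fixed budget of 144 flashes/second
# with a whole number of flashes per frame?
FLASH_RATE = 144  # flashes per second, fixed by the projector

def flash_multiples(flash_rate: int) -> dict[int, int]:
    """Map each compatible frame rate (24fps and up) to its flashes-per-frame count."""
    return {fps: flash_rate // fps
            for fps in range(24, flash_rate + 1)
            if flash_rate % fps == 0}

for fps, flashes in sorted(flash_multiples(FLASH_RATE).items(), reverse=True):
    print(f"{fps} fps -> each frame flashed {flashes}x")
# 144 -> 1x, 72 -> 2x, 48 -> 3x, 36 -> 4x, 24 -> 6x.
# 96fps is missing because 144/96 = 1.5 is not a whole number of flashes
# per frame; it only fits if the cadence is defined over a 2-second
# window of 288 flashes, as the text suggests.
```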

Anyone seeking to work with variable frame rates à la Trumbull could construct their digital prints to reflect the 144fps standard, varying the number of images by including single frames, or doubling up as appropriate frame by frame, rather than double or triple 'flashing' the same frame. The 3-chip projector technology has been limited by the bandwidth available for data transfer, rather than by any inherent need for frames to be repeated. With 4 million micro-mirrors to co-ordinate on a 4K chip this is no small challenge; however, high-capacity cabling is becoming the norm, so that limitation should fall away.
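
For what it's worth, here is a hedged sketch of how such a variable-rate 'digital print' might be laid out against the fixed 144-per-second cadence. The shot list and function are hypothetical, purely to illustrate the packing arithmetic, not any actual DCP mechanism:

```python
# Hypothetical packing of variable frame-rate shots onto a fixed
# 144 flashes/second cadence: each shot's rate must divide 144, and
# each distinct frame occupies 144/fps consecutive flash slots.
FLASH_RATE = 144

def flash_schedule(shots: list[tuple[float, int]]) -> list[int]:
    """Expand (duration_seconds, fps) shots into a per-flash list of
    frame indices."""
    schedule: list[int] = []
    frame = 0
    for duration, fps in shots:
        assert FLASH_RATE % fps == 0, "shot fps must divide 144"
        repeats = FLASH_RATE // fps
        for _ in range(round(duration * fps)):
            schedule.extend([frame] * repeats)
            frame += 1
    return schedule

# e.g. a calm 24fps dialogue shot followed by a 72fps action beat
schedule = flash_schedule([(1.0, 24), (0.5, 72)])
print(len(schedule))  # 216 flash slots = exactly 1.5 seconds at 144/s
```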

The original standardisation of film around 24fps was driven by the introduction of optical sound, which needed that speed for acceptable projection quality, and the use of double and quadruple flashing in movie projectors has long meant that they are geared to 48 or 96 flashes per second of 24 frames. TV and video were pragmatically synched to the prevailing local AC electricity supplies, and there wasn't much change until the arrival of progressive video on computers, using lower or higher rates as required. These standards were all adopted for good engineering reasons and economy, but only limited attention was given to aesthetics or perception. One way or another, they've worked pretty well and we're attuned to their feel.

It is easy to presume that higher frame rates bring an automatic improvement in quality by overcoming some traditional irritations like flicker, but they make different demands on our perception. The symmetry between camera and projector has been broken. Any change will probably come from the camera side and post-production, which brings us back to Trumbull's notion of a relationship between frame rate and audience response.

More than a feeling

To get a little closer to the question of 'cinema feel', it's worth reminding ourselves that our brain is not just concerned with seeing, hearing and interpreting a movie. All kinds of basic bodily activities are co-ordinated at the same time, and alongside all the nerve signals and specialist cell responses, the brain hosts a mass of chemical operations, with fountains of chemicals flooding through it at various rates according to the demands being made on it. With nerves passing information throughout the body at rates of between 1 and 1,000 times per second, there is a massive task of synchronisation, though we lack a single organ which functions like a clock.

The visual pathway from retina through the brain to the primary visual cortex was described by David Hubel, work which won him a Nobel Prize in 1981, and his milestone book, 'Eye, Brain, and Vision', is available chapter by chapter on the Harvard Medical School website. Checking with someone who ought to know, experimental psychologist Niko Busch, a professor in Berlin's 'Mind and Brain' research group, he confirmed that Hubel's account still stands, even though it represents research dating from the 1950s onwards.

Busch ( http://oszillab.net ) has been involved in various studies of flicker and its influence on the brain, using standard psychological testing methods and electroencephalography (EEG) to accurately assess the electrical impulses arising in the brain in response to flicker and other short-term events. One outcome was that the brain continues to respond to flicker at frequencies higher than those at which observers have ceased to be aware of it. People's awareness of flicker varies: in his test group only about 15% noticed flicker at rates above 50Hz and none above 55Hz, while everyone's brain registered a response up to 100Hz, with an outlying response at about 160Hz.

Equally surprisingly, people also judge duration differently: under flicker there is a tendency to overestimate the passage of time compared to a situation without flicker. In another experiment, sudden 'saccadic' eye movements were stimulated by distracting the viewer, and this impaired their ability to detect changes in images shown before and after the event (say, an object within a photograph where a bit of Photoshopping had changed its colour or position, or removed it completely from the image), though the observers weren't necessarily aware of any eye movement. This might be good news for props and continuity people, whose errors might be overlooked, but it isn't so encouraging if a subtle change is crucial to the plot.

While it would be misleading to imagine that the brain sees images in discrete frames, or has a particular frame rate, Busch mentioned that the rate of nerve impulses slows in different regions of the brain, and that in the areas dealing with higher-level functions it is only around 10 per second, which I found surprisingly low. Electrical activity in the brain is much more complicated than the intuitive assumption that low levels of activity are associated with sleep and that frequencies simply increase with our level of awareness and attention. There are a number of other factors that might help us appreciate the complexity of visual perception.


Photons and nerve signals

For the 'dark-adapted eye', the number of photons required to trigger an impulse giving rise to a visual sensation is astonishingly low: a few thousand photons per second will suffice, a phenomenon that makes it possible for us to see the stars. But even the very notion of a photon is ambiguous. Though now accorded the status of an elementary particle, photons lack mass or charge, but carry 'spin'. Little more than a twirl in space, they nevertheless sustain their consistency across astronomical scales until they encounter something like the human eye, at which point the light-sensitive molecule rhodopsin responds by changing its geometric structure from its bent (cis) form to a straightened (trans) form, taking several seconds to revert to the original state. This sensitivity is even more surprising after taking into account the 'noise' generated within the eye, which needs to be filtered and ignored, or the process of image stabilisation required to see the stars without putting our head in a clamp, or novocaining the muscles around the eyeball.

Hubel points out that, unlike other nerve signals, those from the retina are analogue as they arise, varying in amplitude with the strength of the stimulus, though that analogue quality is quickly replaced by standard nerve signals as information progresses along the visual pathway. At the back of the brain, within 40msecs (1/25th of a second), elements of line and form begin to be defined, but the visual pathway subsequently divides, so that signals from both eyes referring to the left side of the visual field are sent to the right brain hemisphere, and impulses from both eyes referring to the right side of the field of view are sent to the left hemisphere, via two bodies called the lateral geniculate nuclei (LGN). Effectively, one half of the brain is receiving one half of the image seen by both eyes, but only about two-thirds of the resulting impression is the product of binocular vision, with information from both eyes.

Our field of view is normally almost 180°, but the shape of our heads, especially the nose, means that each eye can only cover about 150°. About 30° on each side is visible to one eye only, even when we swivel the eyeballs to look from side to side. Looking straight ahead, this is also affected by the density of cells in the retina that distinguishes peripheral from central vision, which might help account for the exaggerated impression of depth familiar from stereoscopic photographs, compared to our usually milder sense of depth in normal vision.
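
Those illustrative angles can be checked with a line or two of arithmetic, and the two-thirds binocular figure quoted earlier drops straight out:

```python
# Field-of-view arithmetic from the figures above (illustrative angles).
TOTAL_FIELD = 180  # degrees, combined field of view
PER_EYE = 150      # degrees covered by each eye

binocular_overlap = 2 * PER_EYE - TOTAL_FIELD  # 120 degrees, seen by both eyes
monocular_per_side = TOTAL_FIELD - PER_EYE     # 30 degrees each side, one eye only
print(binocular_overlap, monocular_per_side)   # 120 30
print(binocular_overlap / TOTAL_FIELD)         # 0.666... -> about two thirds
```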

The LGN is also involved in identifying objects and in our ability to concentrate on elements within the field of view. As well as sending information on, it also receives feedback from the cerebral cortex. Even then, perception isn't based on a simple continuous stream of data. For example, when the retinal image is blurred due to head or eye movement, the whole system can stall for up to a quarter of a second, holding on to the last retinal impression of the visual field. The blur is ignored before the next clear impression is formed. In this sense, there is a 'historic' aspect to visual perception.

Watching a movie, whether in the cinema or on a monitor, is really quite different to normal sight. This has potentially serious implications for immersive cinema and 3D, though a lot depends on the size of your nose.

The 'immersive' cinema concept derives from the huge curved screens installed for systems like IMAX, Cinerama or VistaVision, where an image might be projected onto a screen 100ft wide and many members of the audience do not see the edge of frame on either side. A deep curve enables the audience to look left and right without their eyes having to make many focus adjustments, by equalising the distance between centre frame and the left and right edges (though projectionists often have difficulty keeping focus, which is a different story).

Taking into account the left-right division of the visual pathway, consider a viewer wearing 3D goggles that present alternate left and right frames: at 24fps the visual pathway is handling signals comprising about two-thirds information from both eyes and one-third from a single eye, at 12fps for each left or right frame of the 3D DCP, whether the projector is double-flashing or not. From a design perspective this is a bit of a headache, and a headache is what many people get. That might clearly indicate the desirability of a 48fps system, but even that is lower than the threshold at which some people notice flicker. Even for 2D immersive cinema, the 48fps option could see the reintroduction of flicker due to the monocular/binocular distinction, even though the DLP has eliminated moments of darkness. There's more to what you get than what you see.
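
The per-eye arithmetic is simple enough to set out explicitly. This short sketch just compares frame-alternating 3D content rates against the roughly 50-55Hz flicker-awareness range reported from Busch's tests above; the 55Hz constant is taken from those results:

```python
# With frame-alternating left/right 3D, each eye receives fresh frames
# at only half the content rate, however often each frame is re-flashed.
FLICKER_AWARENESS_HZ = 55  # upper bound of flicker awareness in Busch's group

for content_fps in (24, 48, 96):
    per_eye = content_fps / 2
    note = "below" if per_eye < FLICKER_AWARENESS_HZ else "above"
    print(f"{content_fps}fps 3D -> {per_eye:g}fps per eye ({note} flicker awareness)")
# Even 96fps 3D delivers only 48 fresh frames/second per eye,
# still below the rate at which some viewers notice flicker.
```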

The 'cinema feel' of 35mm has probably gone forever (if it ever really existed outside the nostalgic minds of enthusiasts), but it might be worthwhile to fix a few EEG electrodes to audience members as they watch some test films, to discover whether the technology is encouraging or hindering the appropriate response to genre or emotion.

Over the last couple of decades, movie-makers have come to terms with wave after wave of new cameras, editing systems and formats, alongside all the ancillary equipment, from batteries to cables to nuts and bolts, that each wave expensively involves. Rather than dashing towards yet another technical horizon, it might be worth exploring what the audience actually perceives, if the aesthetics of movies based on the new technologies are really to succeed. Then production equipment might be manufactured to match verifiable aesthetic goals, and not just price, or the often questionable claim to match some arbitrary engineering standard.
