Photons and nerve signals
For the 'dark adapted eye', the number of photons required to trigger an impulse giving rise to a visual sensation is astonishingly low: a few thousand photons per second will suffice, a phenomenon that makes it possible for us to see the stars. But even the very notion of a photon is ambiguous. Though now accorded the status of an elementary particle, photons lack both mass and charge, yet carry 'spin'. Little more than a twirl in space, they nevertheless sustain their consistency across astronomical distances until they encounter something like the human eye, at which point the light-sensitive molecule rhodopsin responds by changing its geometric structure, switching between its bent (cis) and straight (trans) isomeric forms and taking several seconds to revert to the original state. This sensitivity is all the more surprising once we take into account the 'noise' generated within the eye, which must be filtered out and ignored, or the process of image stabilisation required to see the stars without putting our heads in a clamp, or novocaining the muscles around the eyeball.
Hubel points out that, unlike other nerve signals, those from the retina are analogue: they arise and vary in amplitude depending on the strength of the stimulus, though that analogue quality is quickly replaced by standard nerve impulses as information progresses along the visual pathway. At the back of the brain, within 40 msecs (1/25 of a second), elements of line and form begin to be defined, but the visual pathway subsequently divides: signals from both eyes referring to the left side of the visual field are sent to the right hemisphere, and impulses from both eyes referring to the right side of the field of view are sent to the left hemisphere, via two bodies called the lateral geniculate nuclei (LGN). Effectively, each half of the brain receives one half of the image seen by both eyes, but only about two-thirds of the resulting impression is the product of binocular vision, with information from both eyes.
Our field of view is normally almost 180°, but the shape of our heads, and especially of the nose, means that each eye can only cover about 150°. About 30° on each side is therefore visible to one eye only, even when we swivel the eyeball to look from side to side. Looking straight ahead, this is also affected by the density of cells in the retina, which distinguishes peripheral from central vision and might help account for the exaggerated impression of depth familiar from stereoscopic photographs, compared with the usually milder sense of depth in normal vision.
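The geometry above can be sketched with a few lines of arithmetic. This is only an illustration using the approximate figures quoted in the text (180° total field, 150° per eye), not measured values:

```python
# Assumed round figures from the text, not measurements.
TOTAL_FIELD = 180   # degrees: combined horizontal field of view
PER_EYE = 150       # degrees: field covered by each eye

# The two 150-degree fields must fit within the 180-degree total,
# so their overlap is the binocular region.
binocular = PER_EYE + PER_EYE - TOTAL_FIELD    # region seen by both eyes
monocular_per_side = TOTAL_FIELD - PER_EYE     # wedge seen by one eye only

print(binocular)                # 120 degrees of binocular overlap
print(monocular_per_side)       # 30 degrees on each side seen by one eye
print(binocular / TOTAL_FIELD)  # roughly two-thirds of the field is binocular
```

The last line recovers the 'about two-thirds binocular' proportion mentioned earlier, which is simply a consequence of these two angles.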
The LGN is also involved in identifying objects and in our ability to concentrate on elements within the field of view. As well as passing information on, it receives feedback from the cerebral cortex. Even then, perception isn't based on a simple continuous stream of data. For example, when the retinal image is blurred by head or eye movement, the whole system can stall for up to a quarter of a second, holding on to the last retinal impression of the visual field: the blur is ignored until the next clear impression is formed. In this sense, there is a 'historic' aspect to visual perception.
Watching a movie, whether in the cinema or on a monitor, is really quite different from normal sight. This has potentially serious implications for immersive cinema and 3D, though a lot depends on the size of your nose.
The 'immersive' cinema concept derives from the huge curved screens installed for systems like IMAX, Cinerama or VistaVision, where an image might be projected onto a screen 100ft wide and many members of the audience never see the edge of frame on either side. A deep curve allows the audience to look left and right without their eyes having to make many focus adjustments, by equalising the distance between centre frame and the left and right edges. (Though projectionists often struggle to keep the whole screen in focus, which is a different story.)
Taking into account the left-right division of the visual pathway, consider a viewer wearing 3D goggles that present alternating left and right frames. At 24fps, the visual pathway is handling signals comprising about two-thirds information from both eyes and one-third from a single eye, at an effective 12fps for each left or right frame of the 3D DCP, whether the projector is double-flashing or not. From a design perspective this is a bit of a headache, and a headache is what many people get. That might seem a clear argument for a 48fps system, but that is still lower than the threshold at which some people notice flicker. Even for 2D immersive cinema, the 48fps option could see the reintroduction of flicker because of the monocular/binocular distinction, even though the DLP has eliminated moments of darkness. There's more to what you get than what you see.
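The frame-rate arithmetic behind this can be made explicit. A minimal sketch, assuming a stream in which left and right frames simply alternate, so each eye receives only every second frame (the function name and figures are illustrative, not from any standard):

```python
# Per-eye frame rate in an alternating-frame 3D stream: each eye sees
# only every second frame, so its rate is half the stream rate.
# Double-flashing repeats frames to raise the flash rate, but delivers
# no new image information.
def per_eye_rate(stream_fps: float, eyes: int = 2) -> float:
    """New frames per second reaching each eye of an alternating stream."""
    return stream_fps / eyes

print(per_eye_rate(24))  # 12.0 new frames per second per eye at 24fps
print(per_eye_rate(48))  # 24.0 per eye at 48fps, matching 2D at 24fps
```

On these assumptions, a 48fps 3D stream only restores to each eye the 24fps that a conventional 2D presentation already provides, which is why the figure still sits below some viewers' flicker threshold.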
The 'cinema feel' of 35mm has probably gone forever (if it ever really existed outside the nostalgic minds of enthusiasts), but it might be worthwhile to fix a few electrodes and EEG the audience as they watch some test films, to discover whether the technology is encouraging or hindering the appropriate response to genre or emotion.
Over the last couple of decades, movie-makers have come to terms with wave after wave of new cameras, editing systems and formats, alongside all the ancillary equipment, from batteries to cables to nuts and bolts, that each wave expensively entails. Rather than dash towards yet another technical horizon, it might be worth exploring what the audience actually perceives, if the aesthetics of movies based on the new technologies are really to succeed. Then production equipment might be manufactured to match verifiable aesthetic goals, and not just price or the often questionable claim to meet some arbitrary engineering standard.