Can a recording of actual or computer-generated subjects ever appear as realistic as the subjects would if they were actually in front of us? Gary Hango charts the challenges.
Stereo 3D, high frame rates, even holography... all have been touted as the Holy Grail of film/video and virtual reality perfection. We muse about someday actually stepping into Star Trek’s “Holodeck”. Much time, sweat and treasure has been, and is being, spent in the search to create and present the most realistic, immersive images to an increasingly demanding public. But can a recording of actual or computer-generated subjects ever appear as realistic as the subjects would if they were actually in front of us?
A photograph can look very realistic, but we immediately know it's a photograph because it's only two-dimensional. Show us a 3D photo or movie and we might be deceived into thinking it's real, but try as we might, we can’t bring into focus objects that were not recorded in focus. Since the technology now exists to change the focus of a picture in post, let’s say that in the near future you could buy a virtual reality device that combines all of this with the ability to track your eye movements and change the focus of the images depending on what you are looking at. Now that would be jaw-dropping.
With the right subject matter we just might be deceived into thinking it’s real, but it would still be just two projections of still images shown every 1/24th or 1/48th of a second. Adding even higher frame rates in an attempt to reproduce reality is, again, just presenting frozen images, captured in ever-decreasing moments in time, one after the other. No matter how high the frame rate used to capture and play back a scene, it is my assertion that it will never reach the point where we can't distinguish it from reality as soon as anything moves, and what good are movies without motion?
The reason for this is biology. Our vision is not based on frame rates. Though the impulses that travel along the nerves to the brain from each rod and cone on our retinas are serial, taking milliseconds to arrive, there is no set pattern or orderly sequence that the impulses follow. It's random. In essence, our eyes send a continuous stream of millions of light variations to our brain, which the brain in turn melds into a continuous stream of images.
By analogy with a video camera, it would be as if the camera had no shutter, the varying voltage at each pixel of its sensor were read randomly and at its own rate, and each sensor pixel were connected directly to its corresponding pixel on a display, with every display pixel refreshing at whatever rate its sensor pixel is read.
With this you would be very close to having a “reality camera”. If somehow you could record the voltage and timecode of each pixel on the camera's sensor separately, and had a playback device that could present each pixel on the display at its correct time, you might be within arm's reach of that sought-after Holy Grail. But there would still be something missing.
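The per-pixel recording idea above can be sketched in a few lines. This is a toy model, not a real device: a tiny hypothetical sensor where every pixel is sampled at its own random moments, each sample stored with a timestamp so playback can refresh each display pixel independently, with no global shutter and no frames. All names and parameters here are illustrative.

```python
import random

WIDTH, HEIGHT = 4, 3          # a toy sensor
DURATION = 1.0                # seconds of "recording"

def scene_brightness(x, y, t):
    # Stand-in for the real scene: brightness varies smoothly with time.
    return (x + y + 10 * t) % 256

def record(seed=42):
    """Return per-pixel (timestamp, x, y, value) samples; each pixel
    is read at its own irregular cadence, as the analogy describes."""
    rng = random.Random(seed)
    samples = []
    for y in range(HEIGHT):
        for x in range(WIDTH):
            t = 0.0
            while True:
                t += rng.uniform(0.001, 0.01)   # this pixel's own random cadence
                if t > DURATION:
                    break
                samples.append((t, x, y, scene_brightness(x, y, t)))
    samples.sort(key=lambda s: s[0])            # global time order for playback
    return samples

def playback(samples):
    """Drive a toy display: a pixel refreshes only when its own sample arrives."""
    display = [[0] * WIDTH for _ in range(HEIGHT)]
    for t, x, y, value in samples:
        display[y][x] = value                    # no global refresh, no frames
    return display

final_display = playback(record())
```

Interestingly, real "event cameras" already work somewhat like this, reporting per-pixel brightness changes asynchronously with microsecond timestamps rather than capturing frames.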
That missing piece is how our vision perceives motion blur of physical objects. If you stare at the wall and move your hand quickly in front of your eyes, you see your hand as a continuous blur while the wall remains sharp. You know it's your hand but cannot make out any detail. If you do the same thing, but this time, as your hand moves past, you follow it with your eyes, you will now see the detail of your hand, but the wall behind it becomes blurred. As we view the world around us, we are continually shifting our gaze from moving object to moving object, and as we do this, we shift what we see as blurred or sharp. We will never be able to shift our focus of interest in this way while watching a film or video or viewing a scene in virtual reality, no matter how high the frame rate goes.
Here’s another example. Hold a piece of paper in front of you with a single object printed in the middle of it and, staring straight ahead, move the paper quickly in a circle. The movement creates a circular blur of the object. Now, using a computer, open a text editor and type an “X” in the middle of the page. Size the application window so it’s smaller than the screen and, dragging the application's title bar with the mouse, move it quickly in a circle several times. The “X” never blurs; you just see a circle of multiple “X”s, and it will appear this way no matter how high a refresh rate your monitor has. Because technology can't change biology and physics, it will clearly need to adapt to and compensate for its limitations in order to create images that we perceive as real.
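A quick calculation shows why the on-screen “X” appears as separate copies rather than a blur. A monitor can only redraw the X at discrete refresh instants, so during one fast circle it occupies a handful of distinct positions, while a physical object sweeps continuously through every intermediate point. The radius and period below are illustrative guesses at a quick hand movement.

```python
import math

RADIUS = 100            # pixels; radius of the circular motion
PERIOD = 0.25           # seconds per circle (a quick movement of the hand)

def positions(refresh_hz):
    """Screen positions at which the X is actually drawn during one circle."""
    n = int(PERIOD * refresh_hz)        # number of refreshes in one revolution
    pts = []
    for i in range(n):
        angle = 2 * math.pi * i / n
        pts.append((RADIUS * math.cos(angle), RADIUS * math.sin(angle)))
    return pts

def largest_gap(pts):
    """Biggest jump (in pixels) between consecutive drawn positions."""
    gaps = []
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        gaps.append(math.hypot(x1 - x0, y1 - y0))
    return max(gaps)

# At 60 Hz the X jumps tens of pixels between refreshes; at 240 Hz the
# jumps shrink but never vanish, so the eye still sees discrete copies.
gap_60 = largest_gap(positions(60))
gap_240 = largest_gap(positions(240))
```

Raising the refresh rate only shrinks the jumps; it never turns the sequence of discrete positions into the continuous sweep a moving physical object produces.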
Overcoming the limitations
So how do we overcome this limitation of images shown to us at a fixed frame rate, and get smooth motion blur that changes as we switch our view from object to object? I believe the key is a technology David Shapton wrote about a while back: vector-based video codecs. These codecs capture and encode images as blocks of shapes, and motion is presented by moving and reshaping these blocks by a certain amount each frame. We could capture our moving subjects with a vector codec using a high frame rate and fast shutter, then add motion blur in post as needed.
Using the same eye-tracking technology as the future virtual reality device mentioned above, which re-focuses the images according to what our eyes are looking at, motion blur could be added to all the moving vector objects our eyes are not following. Moving our gaze from object to object would then shift both the actual focus of the images and the amount of motion blur we see.
A moving object we are not looking at would have motion blur added to the blocks it is composed of. Switch our gaze to that object and follow it, and the motion blur of its blocks would be reduced while opposing motion blur was added to the blocks of everything else. Move our eyes, or our entire head, and all the vectors would be blurred until our vision came to rest on a single spot again.
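The rule described above reduces to one line of maths: blur each block in proportion to its velocity relative to the tracked gaze. The sketch below assumes hypothetical block and gaze structures and an illustrative blur scale; no real codec or headset API is implied.

```python
from dataclasses import dataclass
import math

BLUR_SCALE = 0.01   # seconds of effective "exposure"; illustrative value

@dataclass
class Block:
    name: str
    vx: float   # block velocity on screen, pixels/second
    vy: float

def blur_length(block, gaze_vx, gaze_vy):
    """Blur streak length in pixels: speed relative to the gaze, times exposure."""
    rel = math.hypot(block.vx - gaze_vx, block.vy - gaze_vy)
    return rel * BLUR_SCALE

blocks = [Block("ball", 800.0, 0.0), Block("wall", 0.0, 0.0)]

# Eyes resting on the wall (gaze velocity zero): the ball blurs, the wall is sharp.
staring = {b.name: blur_length(b, 0.0, 0.0) for b in blocks}

# Eyes tracking the ball (gaze velocity matches it): the ball sharpens, the wall blurs.
tracking = {b.name: blur_length(b, 800.0, 0.0) for b in blocks}
```

This reproduces the hand-and-wall experiment from earlier: whichever the gaze follows is rendered sharp, and everything else inherits the opposing blur.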
With all this we would have everything in place for the ultimate virtual reality experience. Though technology has not yet reached the level of computational power required to accomplish what I have proposed, I can see it happening someday. May I be so bold as to call this future virtual reality device the Chalice of Reality Machine?
Main graphic image: shutterstock.com