If you think video processing stops when the video hits the screen, you're about as wrong as it's possible to be!
It’s very tempting to think that what the brain wants to see is pixels. If that weren’t the case, then why is the video industry constantly striving to bring us more of the little light-emitting things?
But the way our brain “sees” things is much more complicated than a camera sensor. After all, we ultimately have to “make sense” of what we see. When we see things we're doing much more than simply perceiving bitmaps.
This topic is going to be increasingly important as we move forward. Screens may ultimately disappear, as we allow technology to “talk” directly to the brain, and when they do that, we’ll be asking questions about what type of “language” they will speak.
In fact, there’s nothing new in the idea of “brain talk”. I’m going to give you a few examples of how our brains “see” what’s out there, and how much of what it “sees” is actually completely made up.
In a later, longer, article, we’ll look at how the mammalian neocortex has grown to give humans a massive cognitive advantage in the way it “sees” the world. But for now, I just want to show you a couple of really odd artifacts of the way that our brains work.
Remember the Cathode Ray Tube?
Most of us remember when computer screens were made using Cathode Ray Tube (CRT) technology. Even though they were capable of remarkably good pictures, until the late ’90s (that’s about twenty years ago) they were curved. You just had to get used to the fact that you were watching a screen that was essentially part of a much larger sphere.
And for most of us, it really didn’t matter. We adjusted to it, at least in the sense that most of us never thought about it much at all.
And then, still very firmly in the era of CRTs, came displays with flat glass. This seemed remarkable at the time, and I remember it made me feel like I was reading from an illuminated piece of paper. At the same time, something really strange happened.
Many people, including me, felt like they were watching a screen that was actually not flat, but curved inwards - i.e. the opposite direction of curvature to a conventional CRT.
In fact, this illusion was so strong, and so pervasive, that manufacturers built in an adjustment to apply a convex "filter" to the display, to counteract the concave "filter" that our brains were applying.
Why did they have to do this?
Because all of our years of experience of watching CRTs had taught our brains to "see" a flat screen surface when we were looking at a curved one. Back then, when we watched TV, we were doing so through a concave filter: just enough to counteract the curvature of the CRT display.
The way we did this was more sophisticated than you might think at first, because, when we looked away from the screen, everything looked normal. The "filter" was obviously a contingent one, and our brains were applying it in the form of a simple algorithm: "If this brain knows that it is watching a Cathode Ray Tube display, then apply a corrective concave filter, but only to the display surface of the tube".
A couple of years after the "flat display CRT" screens first arrived, I was able to test and prove this hypothesis, by looking at a completely flat screen: one of the newly arrived plasma screens.
These displays, even though they weren't exactly depth-less, were clearly different from a CRT. At the very least, they were very much wider and taller than they were deep.
And when I first saw one, what happened? Did my brain correct for curvature? No. Not at all. Because my brain knew that this wasn't a CRT. It was a genuine flat screen display, so no curvature of the screen was expected, and none was perceived nor corrected for.
So the algorithm was more sophisticated in this case. It was (something like): "Is this a CRT-type flat screen? (You can tell because the screen is almost as deep as it is wide.) If it is, then apply the concave filter correction. If it isn't (and you can tell this because, with a plasma screen, the depth is very significantly less than the width of the screen), then don't apply the concave correction."
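Purely as a playful illustration of that rule (nobody is suggesting the brain literally runs code), the conditional filter might be sketched like this - the class, function names and the "depth is half the width" threshold are all invented for the example:

```python
class Display:
    """A toy display description: width and depth in arbitrary units."""
    def __init__(self, width: float, depth: float):
        self.width = width
        self.depth = depth

def perceived_correction(display: Display) -> str:
    """Sketch of the hypothetical rule the brain seems to apply.

    If the object looks like a CRT (nearly as deep as it is wide),
    assume the screen surface is curved and apply a concave
    correction; otherwise, expect a genuinely flat panel.
    """
    looks_like_a_crt = display.depth > 0.5 * display.width
    if looks_like_a_crt:
        return "apply concave correction"
    return "no correction"

# A deep CRT-shaped box triggers the correction...
print(perceived_correction(Display(width=40, depth=45)))
# ...while a thin plasma-shaped panel does not.
print(perceived_correction(Display(width=100, depth=10)))
```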
Far from this being a one-off example, I suspect that almost everything we see has this type of "filter" going on. Most of the time it's very subtle and very much part of what we're used to seeing. Most of the time we don't notice it.
But just occasionally, there's some anomaly that makes it absolutely clear that the brain is doing a significant amount of "filtering".
I came across an example the other day and it involves the way that RedShark works.
RedShark is an online newspaper. The online paradigm extends to the way we put it together. We don't have a desktop publishing application; instead we use a web-based one, called Joomla. (We have our own in-house programming team to maintain and customise it.)
One of the things we can do, because of the online nature of RedShark, is see very accurately (but anonymously, of course) how many people are reading RedShark. We have several pages of information that we can turn to, but one of the most interesting is the "page impressions per second" display, which shows us in real time how many people are clicking on our site.
It's a clever display that scrolls from right to left, and, every second or so, it shows us how many people are reading our pages.
It always scrolls from right to left at the same speed, regardless of whether we're very busy or a bit quieter.
A couple of weeks ago, there must have been a problem with this part of the program, because it would stop scrolling after about ten seconds.
But the odd thing was that it didn't look as though it had stopped scrolling. Even though you could see that the bars making up the chart were neither coming in from the right nor going off the left edge, they still looked as though they were moving. Moving bars have to go off the edge of the chart at some point, and these didn't, because they weren't moving. But they still looked as if they were, despite that apparent contradiction.
It was a bit like that sensation you get when you have been spinning for some reason and then stop: the room carries on moving. We were just so used to seeing these histogram bars moving that we were simply unable to comprehend the notion that they might be static, for the first time, ever.
It's your brain doing this. And if your brain can invoke illusions that are this powerful, what else can it do?
Where's all this leading to?
Not to any specific conclusion in this article, except to say that there is much more to the way we see the world (and feature films, for example) than the light, colour and movement portrayed by the pixels on the screen.
The reality (if that's what it is!) is that our brains are not only filling in most of the detail, but applying assumptions that are sometimes completely wrong.
It's as if we have a "reality engine" in our brains, which we feed with clues from the outside. The reality itself is generated in our heads.
Some scientists have argued that this "reality engine" is responsible for our dreams too. That's why dreams seem... so real.
And perhaps it means that the only difference between our dreams and our waking perception is that when we're not asleep, our reality engine is guided by our perceptions.
All of which makes me think that to make video better, we're wasting our time chasing pixels. What we should be chasing instead is the "language" of our brain's reality engine.
We'll be looking into this in much more detail over the next few months, because this, not more pixels, is the future of video.