
Believing is seeing


Replay: When you have a memory, is it ever out of focus? Does it suffer from rolling shutter or moiré? There’s a very good reason why the answer is “no”.

As we learn more about AI techniques, we’re learning about real intelligence as well. It’s a hard thing to explain. A new book, “A Thousand Brains: A New Theory of Intelligence” by Jeff Hawkins, is the best attempt I’ve seen. Intelligence, consciousness and memory are three aspects of our living selves that are fundamental to our mental lives and yet fundamentally hard to explain.

But even though we may not be able to fully explain these phenomena, we can at least observe how they function, by looking at how we, individually, perceive things.


Perception is a bit of an open book. When you say the phrase “I perceive”, you’re actually referring to a long list of parallel and serial processes that lie on the path between the thing you are perceiving and your conscious awareness of it.

So far, so deeply philosophical. In more practical terms, in order to improve the video experience, I think we should be looking more towards how our perception works than towards how to cram more pixels into the archaic unit of measurement that we call an “inch”.

But how is this even relevant? We’ve managed up to this stage perfectly well without going inside our heads. Surely the priority will always be to get the best possible image on a screen?

Well, for a start, that does make the rather big assumption that there will always be screens. There won’t be, because screens are too big, too flat and not very portable, and I believe that screens will end up in the same relation to “video” as semaphore does to modern radio communication.

You can see ultimately where I think this will end up in my article about the Metaverse, published recently on RedShark here. But until then, let’s try a thought experiment.

I don’t know about you but when I was growing up, most schools had some sort of grass field where you could run around and get rid of your excess energy in school breaks. You could also play football/netball/cricket (substitute your national field sports here…).

As kids, we’d spend a lot of time on these fields. I can remember mine pretty quickly, and when I do think about it, I picture it in golden sunshine as I imagine the warmth on my face. I’m not sure whether this is an accurate memory or just a nostalgic “look” that I grade my memories with.

Here’s where it gets interesting.

Try to remember a square foot of that grass. Look closely at it. You’ll immediately see that it’s not perfect. Grass rarely is. Look even closer and you’ll see that there are several different types of grass, and some plants that aren’t grass at all. Move away and they merge into a mottled green mass. Move even closer and something very surprising happens.

Zoom your mind into a single blade of grass. What does it look like? Is it a vaguely defined patch of generic grass green? Not in my memory. I can clearly “see” the details on the object. I can see vertical ridges and variations of colour. I can see the edges very sharply. I can go even closer and start to see hints of a cellular structure.

What did I just say? Surely I can’t remember in that detail. Nor have my eyes ever acted like a microscope. And where am I storing all these petabytes of remembered pixels?

The answer is that of course I’m not. There’s no way that I could possibly be doing that. Nor would it make sense to. We don’t think in pixels.

I’m not going to speculate in too much detail about how we do this. If you have anything like the same experience as me when you’re remembering things from the past, you’ll know what I mean (it’s not just grass: try it with any recollected scene). Above everything else is the question of what seems real. My memories seem real. My dreams seem real as well, even though they’re often absurd and, well, impossible with our current knowledge of physics. It’s almost as if there’s a function in our brains that acts as a “reality dispenser”. A bit like putting tomato sauce on a hot dog, or mayonnaise on fries (I guess that’s a European thing), it gives everything a “flavour” of reality.

I wonder if this dispenser is at work all the time, even when we’re looking at or perceiving real things. Maybe there’s no difference between a real-seeming dream and “real” things out there in the world, except that one is guided by our perception and the other by a random sequence of memories, and assumptions about those memories, that we call “dreams”.

At which point you might reasonably ask, “what on earth is he talking about?”

The perception disconnect

My point is that there is a disconnect (or rather, there never was a solid connection) between what we see on a screen and what we perceive. Often, if not usually, there is a correspondence; we’ll never know precisely, because we can’t look into each other’s minds. But the sense of reality or realism that we feel is probably derived from some sort of endorsement from our brain, based on previous or learned knowledge. Maybe in our minds we have a kind of database that stores idealised versions of the things we see in the outside world.

This isn’t a new idea. The Greek philosopher Plato hinted at it. I doubt he had video resolutions in mind, but what this does show is that perception is deep, nuanced and hard to understand. And that’s precisely why the next stage in the evolution of video should be aimed at perception, not pixels.

As a footnote, I just want to mention that I’ve had some fascinating discussions with people who actually are looking in this direction. I’ll write more about it soon.

Tags: Technology VR & AR Featured Futurism