
Do we perceive images using more than pixels?


Replay: We might think we know what makes up our perception of images, such as resolution, colour, and pixels, but what if there's much more to it than that? RedShark's Editor at Large, David Shapton, gives us his thoughts on the matter.

It's fair to say that I'm not the youngest writer on the RedShark team. It's also fair to say that my eyesight and my hearing are not what they were. I certainly didn't use to have to wear reading glasses, and I didn't say "pardon" as often when I was younger.

I've always been fascinated by the process of perception, so you can understand why I often consider my own processes and how they've changed over the years.

Because they certainly have changed.

I know I can't read anything on my phone without glasses unless it's in ludicrously large type. My glasses correct this, but it's a compromise - especially because I also have astigmatism, with a different amount and angle in each eye. (Astigmatism is where your ocular lens is no longer completely spherical. Yes, it's true: I have anamorphic lenses!)

Even though my glasses are good, they only work over a very narrow range. Annoyingly, my TV is just far enough away that they don't work completely at that distance. I could, of course, have varifocals or simply multiple pairs of glasses.

So you might quite reasonably think that I don't care much about images. If my eyesight's not great, then why would it matter? Strangely, I've found that the opposite is the case, and no one is more surprised about that than me.

I may be completely wrong about this, but I think that there's more going on here than meets the sensory organs.

More than resolution

Let's be specific about this. I think there is more in an image than can be resolved by your eyes, and there's more in an audio recording than can be picked up directly by the ears. To put it technically, I think there is a macro effect, or an epiphenomenon, in addition to the original visual or auditory phenomenon.

I'm sorry to have to use a word like epiphenomenon in a family publication, but it's the best word to describe what I think is happening. It has various meanings, but the one I'll use is this: an epiphenomenon is a secondary phenomenon that occurs alongside, or in parallel to, a primary phenomenon. I know that's pretty vague, but I'll explain later.

We tend to think of image reproduction as being a matter of resolution. Assuming the projector or display screen is in focus, we should be able to judge the sharpness of the image with our eyes. But what if there were something else about the image that gave us clues as to its characteristics? Something beyond the mere state of its pixels.

Let me give you an example: the vernier effect.

What is a vernier scale?

A vernier scale is a way to magnify very small differences by running two measuring scales parallel to each other, with one graduated slightly more finely than the other. Or it can be the effect you get when two nearly parallel lines cross each other: a very small parallel movement of one line can cause a large movement of the point where they cross. You could argue that some interference patterns are the result of a vernier effect.
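
If you like to see the arithmetic, here's a minimal sketch (the slopes and offsets are made-up numbers, purely for illustration): two lines y = m1*x and y = m2*x + c cross at x = c / (m1 - m2), so the closer the slopes, the more a tiny shift of one line is magnified.

```python
# A sketch of the "vernier" magnification described above, using
# made-up slopes. Two almost-parallel lines, y = m1*x and y = m2*x + c,
# cross at x = c / (m1 - m2), so a tiny vertical shift of the second
# line moves the crossing point by 1 / (m1 - m2) times as much.

def crossing_x(m1: float, m2: float, c: float) -> float:
    """x-coordinate where y = m1*x meets y = m2*x + c."""
    return c / (m1 - m2)

m1, m2 = 1.000, 0.999                        # slopes differ by only 0.001
print(round(crossing_x(m1, m2, 0.010), 3))   # -> 10.0
print(round(crossing_x(m1, m2, 0.011), 3))   # -> 11.0
# Shifting one line by just 0.001 moved the crossing point by a full
# 1.0: a 1000x magnification of a very small movement.
```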

My reason for mentioning the vernier effect is that in my view it is a macro effect with very real consequences.

The vernier effect and moire are two sides of the same coin. You see moire patterns when two or more finely spaced patterns overlap in such a way as to interfere with each other. Instead of seeing two layers of a fine mesh, you see a pattern where the physical material in both meshes lines up from your point of view. Moire itself is related to aliasing - the unwanted effects that come from viewing images through a "grid" of sampling. Both effects are usually unwanted, and both occur far more frequently than we realise. Aliasing is present in any digital image: it's just that it's not normally "exposed", because it is surrounded by detail and complexity.
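
As a rough illustration of the same idea in one dimension (the pitches are arbitrary, chosen only to keep the arithmetic tidy), two stripe patterns with nearly equal pitches combine to produce bands far coarser than either pattern - moire as a kind of spatial beating:

```python
# A 1-D sketch of moire as "spatial beating", using arbitrary pitches.
# Two stripe patterns at 10 px and 10.5 px overlap to produce bands
# with a pitch of p1*p2 / |p1 - p2| = 210 px - far coarser than either
# original pattern, and that coarse band is what your eye picks out.
import numpy as np

x = np.arange(2000)
a = np.sin(2 * np.pi * x / 10.0)   # fine stripes, 10 px pitch
b = np.sin(2 * np.pi * x / 10.5)   # fine stripes, 10.5 px pitch

product = a * b  # roughly what you see through two overlapped meshes
# The product contains a slow term, cos(2*pi*x*(1/10 - 1/10.5)),
# which repeats every 210 px: the broad moire bands.
print(10.0 * 10.5 / abs(10.0 - 10.5))  # -> 210.0, the moire band pitch
```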

I'll explain what I mean by "exposed". You normally see aliasing when there's a straight line that's neither parallel nor perpendicular to the grid of pixels. The closer the line is to being exactly horizontal or vertical, the worse the visible aliasing. It is, in its own way, another example of the vernier effect. A single line that's almost vertical but offset by two pixels at the top will have two very sharp kinks in it. Reduce that offset by one pixel and the line changes dramatically to display only a single sharp kink: a tiny movement can make a significant and visible difference to the appearance of the line.
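
You can check this with a toy rasteriser - a sketch with made-up dimensions, not how any real renderer works: round the line's true position to the nearest pixel column on each row, and count the rows where the chosen column steps sideways.

```python
# A toy rasteriser, with made-up dimensions, for the near-vertical
# line described above: round the line's true x-position to the
# nearest pixel column on each row, then count the rows where the
# chosen column jumps sideways (the visible "kinks").

def kink_count(rows: int, offset_px: float) -> int:
    """Count sideways steps in a line that drifts offset_px columns
    over the given number of rows."""
    cols = [round(r * offset_px / (rows - 1)) for r in range(rows)]
    return sum(1 for a, b in zip(cols, cols[1:]) if a != b)

print(kink_count(100, 2.0))  # offset of two pixels -> 2 kinks
print(kink_count(100, 1.0))  # offset of one pixel  -> 1 kink
# A sub-pixel change to the line's angle visibly changes the number
# and position of the kinks, just as a vernier magnifies tiny shifts.
```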

But every curve or line in any of your digital images will display aliasing if you zoom in far enough. Most of the time you don't see this if the resolution is high enough (that may be the biggest advantage of higher resolutions: they don't reduce aliasing, but they make it smaller and hence less visible), but it is there, nevertheless, in all digital images.

I'm not stating what follows as fact, or even as a viable theory. I'm simply not sure. I'd love to know what you think.

Macro effects

I think there are macro effects in images that we don't see directly, but which nevertheless inform our perception and give us extra information about the image. I'm going to call it "natural intrinsic metadata". It's there in every reproduced image, and its nature and quantity depend on the resolution.

As humans, we do not perceive directly through the medium of pixels. The job of a good digital image is to make its constituent pixels invisible.

But the fact that I can tell the difference between an HD, a 4K and an 8K image, even when I'm not wearing the right glasses, suggests to me that there is a macro effect of the image that I'm picking up. I find it is more apparent with moving images than still ones. That might be a clue. It may even be an effect that we have no current explanation for. That wouldn't be surprising - the field of digital video, especially at extremely high resolutions, is very young. We are in new territory here.

And it's not just with video.

A few years back, my hearing was damaged when a firework exploded close to my ear. I was pretty despondent. Not only couldn't I follow conversations in noisy environments, but I couldn't pitch sounds correctly any more. As a musician, it was devastating.

But it has improved - a lot. I was told that it probably wouldn't, but it has, and measurably so - I run regular tests with high quality headphones. My hearing is now on a par with anyone of my age: not perfect, but very functional.

Apart from when my hearing was at its worst (when I figured at least I would be able to save money on my next HiFi), I found that I could still hear well enough to spot the difference between mid-priced and expensive HiFi systems. This shouldn't have been possible, but, again, I tested it. I'm used to assessing audio professionally, and I think this helped, but if I couldn't hear the nuances in the music systems I was critiquing, then you might as well have got a dog to write the reviews.

I think, again, that there might be "signals" in the reproduced audio that act as "helpers" for listeners. An example might be (and this is the audio equivalent of moire, I suppose) higher frequencies than I could hear at the time "beating" together to make lower frequencies. This does happen all the time, but we don't perceive it as anything separate from the overall sound. But perhaps we do unconsciously, in the same way as we don't have to do anything conscious to know, pretty accurately, where a sound is coming from.
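
Here's a quick numerical sketch of that beating (the frequencies and sample rate are illustrative, nothing more): two tones four hertz apart sum to a single high tone whose loudness pulses four times a second.

```python
# A minimal numpy sketch of "beating": two tones at 14,000 Hz and
# 14,004 Hz (illustrative frequencies) sum to a 14,002 Hz tone whose
# loudness pulses 4 times a second - a low-frequency envelope made
# entirely out of high-frequency content.
import numpy as np

sr = 48_000                        # sample rate in Hz
t = np.arange(sr) / sr             # one second of time
f1, f2 = 14_000.0, 14_004.0
mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Trig identity: the sum equals 2*cos(pi*(f1-f2)*t) * sin(pi*(f1+f2)*t),
# a 14,002 Hz carrier inside a |f1 - f2| = 4 Hz amplitude envelope.
print(f"peak amplitude: {mix.max():.2f}")        # close to 2.0
print(f"beat frequency: {abs(f1 - f2):.0f} Hz")  # 4 Hz
```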

Simon Wyndham, the Editor of RedShark, mentioned something to me which I think adds weight to the above. He reminded me that when a high resolution image or video is downsampled to a lower resolution, it will very often look better than if it had been shot at the lower quality. I've seen this many times but I first noticed it when the US hospital drama ER started being shot in HD. I didn't have an HD TV then, but the improvement, even when watching in SD, was striking. I noticed it even before I knew that the show had switched to shooting in HD. You see the same effect when watching 4K on an HD monitor and 8K on a 4K monitor. The general principle that explains this is that if you start with more information as you're downsampling, you'll get a better result. But I wonder if there's more happening here.

Sometimes at a high resolution you can see structures that simply aren't visible at a lower quality. I'm not just talking about tiny objects that are smaller than a (lower resolution) pixel, but much larger structures. Imagine a large square whose boundary lines are thinner than one pixel but which are picked up by a higher resolution shot. It's very likely that if these are recorded in, say, 4K, they will still be visible when downsampled. This may be a special case, but the process might be present all the time in most images. Without being obvious enough for us to notice consciously, these hidden geometries might trigger a flag in our cognitive process that says "detail".
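
Here's a toy version of that survival effect (a sketch under simplified assumptions - naive point-sampling versus a box-filter average, with made-up image sizes): a line thinner than one low-resolution pixel disappears if you point-sample at the low resolution, but survives as a fainter trace if you average down from the high-resolution original.

```python
# A sketch, with made-up sizes, of why downsampled footage can keep
# structure that a native low-resolution capture would miss: a bright
# line one high-res pixel wide is averaged into every low-res pixel
# it touches, but naive point-sampling can skip over it entirely.
import numpy as np

hi = np.zeros((64, 64))
hi[:, 33] = 1.0                # a line one high-res pixel wide

# "Shot at low resolution": point-sample every 4th pixel.
native_lo = hi[::4, ::4]       # columns 0, 4, 8, ... miss column 33

# "Downsampled from high resolution": average each 4x4 block.
down_lo = hi.reshape(16, 4, 16, 4).mean(axis=(1, 3))

print(native_lo.max())  # -> 0.0: the structure has vanished
print(down_lo.max())    # -> 0.25: the line survives as a fainter trace
```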

I may be completely wrong about this, and I don't normally write about stuff unless I'm pretty sure about it. But I'm making an exception here because these effects might be real. If they are, the implications would be enormous.

RedShark readers are very astute. I'd love to know what you think.
