11 Feb

There will probably be no 8K. In fact, there will probably be no pixels at all in the future

The end of pixels. Image: David Shapton/RedShark


Here's another chance to read this controversial article, which is also the most-read piece ever on RedShark. What do you think?

Living memory is a wonderful thing. For anyone born in the last sixty years, it encompasses more change than any previous generation has ever seen. And one of the things that we've got used to is increasing resolutions.

From our perspective, today, it all seems to have happened very quickly. Anyone working in professional video today will have very clear memories of Standard Definition video. Some of us still use it! But the current working paradigm is HD.

Next on the horizon

Next on the horizon is 4K. And, with almost unseemly haste, we’re already talking about 8K. In fact, some organisations, like Sony and the BBC, kind-of lump together any video formats with greater than HD resolution, using expressions like “Beyond Definition” (although in Sony’s case, that also means that resolution isn’t everything and that there are other factors like increased colour gamut and contrast that matter as well).

Everyone wants better pictures. There’s nothing wrong with the principle that - all things being equal - if you can record your images in a high resolution format, then you probably should.

The idea of digital video is now so well established that it’s virtually passed into folklore. At the very least, the word “Pixel” is bandied around as if it’s always been part of the language.

In reality, it’s not been around for very long. Cathode ray tubes don’t use pixels, and nor do VHS recorders or any type of analogue video equipment.

Before Pixels

Before pixels came along, video was recorded as a continuously varying voltage. It wasn’t quantized, except, arguably, by the end of a scanning line and the completion of a video field.

Digital video is exactly that. It’s video represented by digits. It’s rather like “painting by numbers” except that rather than representing an image by drawing lines that separate distinct colours, a regular grid is imposed on the picture. Each element in the grid is a pixel, and it is allocated a number that can be used to look up the colour under that part of the grid. It really is that simple.
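
To make that "painting by numbers" idea concrete, here is a minimal sketch; the grid size, palette and values are invented purely for illustration and don't correspond to any real video format:

```python
# A minimal "painting by numbers" sketch: a tiny 4x4 frame stored as a grid
# of numbers, each of which looks up a colour in a palette. The palette and
# values are invented purely for illustration.

palette = {
    0: (0, 0, 0),        # black
    1: (255, 255, 255),  # white
    2: (220, 40, 40),    # red
}

# Each entry in the grid is a pixel: a number, not a colour.
frame = [
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [1, 2, 2, 0],
    [1, 1, 0, 0],
]

# To display the image, every grid entry is translated back into a colour.
rgb_frame = [[palette[p] for p in row] for row in frame]

for row in rgb_frame:
    print(row)
```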

But of course, it's not the best way to represent an image. Nature isn't made up of a grid, and even if it were, it wouldn't match the superimposed pixel grid.

When you think about it, it really does take a stretch of the imagination to understand how something as subtle and organic as a flower can be represented by a string of binary digits. The two concepts might as well exist in different universes. And actually they do: the analogue domain and the digital domain.

But the miracle of digital video is that if you have enough pixels, you won’t notice them. Your mind sees the digital image as if it were an analogue one, as long as you don’t get too close.

That’s the thing. If you don’t have enough pixels and you’re sitting too close, you’ll be able to see the grid.

Most people reading this know this stuff already, and I’m reiterating this part of the theory simply to show that pixels aren’t ultimately the best way to represent images. Yes, if you go to HD for “normal” sized TVs in the living room, it looks good; great, even. And if you want a TV that’s twice that size (and four times the area) then it absolutely makes sense to move to 4K.

Genuine reasons for 8K

There are genuine reasons why you might want to have 8K. For example, even if you can’t see the individual pixels in HD or 4K, if you look closely at diagonal lines, you can see jagged edges, and the closer the line is to horizontal or vertical, the worse it gets. You could even say that aliasing magnifies the pixelation by making it more noticeable. Quadrupling the number of pixels reduces this.
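
To put a rough number on that, here is a small sketch (the line and grid widths are arbitrary) that rasterises the same shallow diagonal onto progressively finer grids; each visible "stair step" shrinks relative to the image as the pixel count grows:

```python
# Rasterise the same shallow diagonal line, y = 0.25 * x, onto grids of
# different widths. The widths are arbitrary; the point is that each visible
# "stair step" gets smaller relative to the image as the grid gets finer.

def rasterise_line(width, slope=0.25):
    # For each pixel column, record which pixel row the line falls into.
    return [int(slope * x) for x in range(width)]

for width in (16, 32, 64):
    rows = rasterise_line(width)
    # Count how many times the rasterised line jumps to the next pixel row.
    steps = sum(1 for a, b in zip(rows, rows[1:]) if a != b)
    print(f"width {width:3d}: {steps} steps, each about 1/{width} of the image tall")
```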

A number of developments make me think not only that other improvements would be of more benefit than the pain and expense of moving up to 8K, but that perhaps we might move away from having pixels at all.

Now, please note that we will probably always have to have pixels when it comes to displaying pictures. Unless we invent some kind of organic, non-grid-based way to display video, we will always see recorded or transmitted images through a spatially quantized grid. But what will, I think, change radically is how we store the video.

Vector Video

What I think will happen is that we will move towards vector video.

If you're a graphic artist, or if you've ever played or worked with Corel Draw or Adobe Illustrator over the last thirty years or so, then you'll be familiar with the distinction between vector and bitmap images. A bitmap is the familiar grid of pixels, each with a set of numbers that describes the colour of the individual square. A vector is completely different. Instead of explicitly stating the colours of each and every part of the object, a vector is a description. You could almost think of it as extremely detailed metadata.
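
As a concrete (if simplified) illustration of that difference, here is a minimal sketch; the circle, canvas size and "vector" fields are invented for illustration and don't correspond to any real file format:

```python
# Two ways of describing the same red circle on a 1920x1080 canvas.
# The "vector" form below is an invented, minimal description used only to
# show the difference in principle; it is not a real file format.

WIDTH, HEIGHT = 1920, 1080
RED, WHITE = (220, 40, 40), (255, 255, 255)

# Bitmap: explicitly state the colour of every pixel on the grid.
def bitmap_circle(cx, cy, radius):
    return [
        [RED if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2 else WHITE
         for x in range(WIDTH)]
        for y in range(HEIGHT)
    ]

# Vector: describe the shape once; a renderer fills in the pixels at display
# time, at whatever resolution the display happens to have.
vector_circle = {"shape": "circle", "cx": 960, "cy": 540,
                 "radius": 300, "colour": RED}

bitmap = bitmap_circle(960, 540, 300)
print("bitmap stores", WIDTH * HEIGHT, "pixel values")  # 2,073,600
print("vector stores", len(vector_circle), "fields")    # 5
```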


Comments
  • Australian IMAX filmmaker John Weiley has suggested that a sensor of 100 megapixels or more is about the resolution of the human eye, so if we have a means of viewing such high-resolution images, will our brain think it is real? And if so, how will we react to such films? I'm thinking of the dramatic effect that Orson Welles' first radio broadcast of 'War of the Worlds' had on an unsuspecting American audience in 1938.

  • I guess I'm not understanding this? Vector-based drawing doesn't address individual pixels, but a screen itself will still need some way to display a description from a vector-based file. I can't draw a line inside one pixel; I can only tell it to luminesce in certain colors at certain times, or no colors at all. Therefore the clarity of your vector-based image is still dependent on the actual resolution of the monitor, no? I can see vector-based images being important when you have the possibility of viewing on multiple resolution monitors, but clarity itself is dependent on the resolution of the monitor you view it on - I would think...

  • Vectors are still always displayed in pixels. All monitors display pixels. It's not a matter of improving clarity for display purposes; it's about effectively containing all of the data. For instance, if you want to draw a circle using pixels, you would do so with thousands of pixels of various shades of colors, resulting in an antialiased edge of the circle. A vector circle would only need a radius and a color. That's FAR less information required. This is pretty much meaningless for 1080p video, but for 4K and 8K... it would be a huge advantage. Why store 10 quadrillion pixels when you can store thousands of vectors of all sorts of sizes and shapes with solid colors or gradients? Plus, it would be extremely easy to morph vectors' shapes for animation purposes.

    Then for the topic of film noise, you could simply overlay an animating noise layer on top of the video. As an added bonus, you could give that control to either the movie makers or the home consumer in the form of options for their media player. You can control the amount of film grain easily if it's not hardcoded into the video.

    TL;DR: It's not about visual clarity, it's about storage and performance concerns for extremely high resolution displays.

  • It seems to me that you underestimate the complexity of the task.
    Even if you detect all the curves and/or surfaces that the scene is made of, there remains the task of correct lighting. Realistic lighting easily takes hours of processing time for each frame. It may be approximated in realtime, but in this case the movie will look like a videogame.
    Anyhow, when vectorizing a photo, you always have to find a balance between size and quality, and in order to gain in file size you must sacrifice some detail. This is exactly the opposite of what 8K promises.

  • I very much agree with the idea that future codecs will be vector based and (pixel) resolution independent.
    As shown in the video, this is and will be possible in (hopefully) the not-too-distant future.

    Though when it comes to digital editing of the footage, vector-based video is highly counterproductive.
    Pretty much every shading language currently around is based on rasterisation,
    and so are our graphics cards, and I don't see that changing without some new groundbreaking discovery (like the aforementioned biological display).

    We need shaders to do all the fancy things we do in digital video editing, including the most simple color correction.
    Vectors are so much more complicated than pixels when it comes to color; there is no real alternative (as far as I know) to breaking the color down into little, easy-to-use segments, aka pixels. While machines can handle both things easily, it's hard for a human to handle vector math and to write the math that alters color information in a vector-based approach in the way we intend to.

    I also think that the comparison to CRT screens and analog film is a bit far-fetched. Analog monitors and film aren't any closer to reality than a binary approach to representing an image. There is actually not really a difference in the logic between altering voltage and altering binary numbers; in the end they are both a mathematical approach to determining color values with fixed value ranges. Both are just limited by different factors.

    For analog video it's the inability to read and write voltage with 100% accuracy. For digital video the limitations are set more by our own definitions, current digital processors and camera sensors. In theory the bit depth and resolution of video could go a lot higher than the current video standards allow; we just limit ourselves because our technology isn't there yet, at least not for the mass market.

    I liked the article nonetheless.

  • I've been daydreaming about a codec called Vector Video for the last 5 years. So to see this article and see you call it out by the same name makes me very happy. It's an ego thing. It means that logically I'm on the right track in my thinking. :)

    That being said, I think Vector Video will play a huge role in future AR (Augmented Reality) systems, when we all replace our smartphones with eyeglasses that composite virtual elements over the top of the real world while blending them in with it. In a situation like this, having vector video elements that scale will be hugely beneficial to pulling off the believability of the scapes we view through these glasses.

    So when talking about these technologies we must not just consider a flat screen like we have today; we have to consider the entire shift of video in the future toward something more immersive. I imagine that eventually we'll be watching stage plays in our living room. Since everyone has a different-sized living room, those elements will need to resize and reposition themselves to fit the proper scale, so that a human being standing in front of us looks the size of any real-life human being who might be standing in front of us.

    The other benefit to vectors in this scenario is that those objects, being vectors, may be easier to extrude three-dimensionally. I'm not 100% sure of that, but in my head it makes sense.

    Eventually all video will be animation. It sort of is today. So when we have these discussions in the future we have to keep in mind that a photograph is never reality, just a certain representation of reality, in much the same way that a description in a novel is a representation of reality. Vector Video, when scaled to a certain size, may begin to look more like animation, but that's OK.

  • I'm no mathematician, but isn't this basically what DCT compression does anyway - expressing sampled values in terms of overlaid sine waves?

  • I don't think that the future of imaging is vector based, but I am sure it is recording every bit of light in space and time, from which you can build any visual resemblance in terms of resolution, motion blur, etc.

    Counting photons, very simple.

    Of course we need to abstract this, but it's where we are heading, at least in recording. Compression for distribution is a completely different beast, and I can imagine that we will see 3D-rendered content based on analyzing video content and creating corresponding 3D scenery instead, which might be easier and smaller to transfer than a compressed video - plus the fact that you can change perspective this way (even if it's only for doing stereoscopic 3D).

  • Yeah, well, haven't we been doing this for a while already? We see it every day in movie theaters, as DCI uses JPEG2000. As far as I can see, a wavelet transform produces pure vector data. Of course, you have to acquire an image before you can transform it, and your acquisition involves creating quantized data by imposing a scanning frequency and a frame rate. When scaling up you also always need to produce interpolated data that you don't have in your original information. Vector scaling just looks a lot more pleasant than raster scaling (it doesn't get soft, for example), but it has its own kinds of artifacts, like an obvious absence of detail if you scale too much. AFAIK all up-to-date codecs use some wavelet schemes - as decoding capabilities get better I'm sure we can leave old pixel-based schemes like DCT behind us altogether. The bottom line is that vectors are just a very clean way of storing dimensional information; the data still has to come from somewhere.

  • And there is already a sense in which we have vector video, in the form of CGI animations. 3D animations are based on models that are nothing more than 3D vector descriptions, with the addition of textures. It would be very easy to build a driver that would output these as vector video.

    Well, the problem with that analogy is that the part that makes the CG scenes look like reality, the textures, are digitally stored files, so it kind of falls apart there. Also, the vectors will be approximations of reality, with rounding errors and shifting errors which would somewhat correlate to the rounding and shifting you get with the resolution and bit-depth of digital files, resulting in similar problems.
    You have probably never seen a vector-based image from Illustrator that doesn't use digitally stored textures and that would fool anyone into thinking it was a photograph; I certainly haven't, but would like to.

David Shapton

David is the Editor In Chief of RedShark Publications. He's been a professional columnist and author since 1998, when he started writing for the European Music Technology magazine Sound on Sound. David has worked with professional digital audio and video for the last 25 years.
