RedShark News

03 Dec

There will probably be no 8K. In fact, there will probably be no pixels at all in the future

The end of pixels (image: David Shapton/RedShark)


There's a seemingly unstoppable trend towards more and more pixels. Greater resolution is heralded as the future of video. David Shapton doesn't think it is. He thinks there is another way. It's a radical suggestion, but completely plausible.

Living memory is a wonderful thing. For anyone born in the last sixty years, it encompasses more change than any previous generation ever saw. And one of the things that we’ve got used to is increasing resolutions.

From our perspective, today, it all seems to have happened very quickly. Anyone working in professional video today will have very clear memories of Standard Definition video. Some of us still use it! But the current working paradigm is HD.

Next on the horizon

Next on the horizon is 4K. And, with almost unseemly haste, we’re already talking about 8K. In fact, some organisations, like Sony and the BBC, kind-of lump together any video formats with greater than HD resolution, using expressions like “Beyond Definition” (although in Sony’s case, that also means that resolution isn’t everything and that there are other factors like increased colour gamut and contrast that matter as well).

Everyone wants better pictures. There’s nothing wrong with the principle that - all things being equal - if you can record your images in a high resolution format, then you probably should.

The idea of digital video is now so well established that it’s virtually passed into folklore. At the very least, the word “Pixel” is bandied around as if it’s always been part of the language.

In reality, it’s not been around for very long. Cathode ray tubes don’t use pixels, and nor do VHS recorders or any type of analogue video equipment.

Before Pixels

Before pixels came along, video was recorded as a continuously varying voltage. It wasn’t quantized, except, arguably, by the end of a scanning line and the completion of a video field.
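The divide between the two worlds described above is the sampling and quantization step. A minimal sketch in Python: the waveform, the sample count and the 8-bit scale here are all invented for illustration and don't come from any real video standard.

```python
import math

def sample_and_quantize(signal, num_samples, levels=256):
    # Sample a continuous function at regular intervals, then snap
    # each value (assumed to lie in 0..1) to one of `levels` steps.
    return [
        round(signal(n / num_samples) * (levels - 1))
        for n in range(num_samples)
    ]

# A smooth "analogue" waveform, scaled into the 0..1 range.
wave = lambda t: 0.5 + 0.5 * math.sin(2 * math.pi * t)

digital = sample_and_quantize(wave, 8)
print(digital)  # eight discrete 8-bit values standing in for a curve
```

Everything downstream of a camera's ADC works on lists of numbers like `digital`; the continuously varying voltage never reappears.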

Digital video is exactly that. It’s video represented by digits. It’s rather like “painting by numbers” except that rather than representing an image by drawing lines that separate distinct colours, a regular grid is imposed on the picture. Each element in the grid is a pixel, and it is allocated a number that can be used to look up the colour under that part of the grid. It really is that simple.
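The "painting by numbers" idea can be shown directly. The palette and the tiny grid below are made up for illustration; they belong to no real image format.

```python
# Each grid cell holds a number; the number looks up a colour.
palette = {0: (0, 0, 0), 1: (255, 255, 255), 2: (200, 30, 30)}  # RGB triples

# A 4x4 "image": nothing but a grid of indices.
grid = [
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [1, 2, 2, 0],
    [1, 1, 0, 0],
]

# Rendering is nothing more than a table lookup per cell.
image = [[palette[cell] for cell in row] for row in grid]
print(image[1][1])  # the colour under grid cell (1, 1)
```

Real formats store the colour value in each cell directly rather than through a palette, but the principle is the same: a regular grid of numbers.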

But of course, it’s not the best way to represent an image. Nature isn’t made up of a grid, and even if it were, it wouldn’t match the superimposed pixel grid.

When you think about it, it really does take a stretch of the imagination to understand how something as subtle and organic as a flower can be represented by a string of binary digits. The two concepts might as well exist in different universes. And actually they do: the analogue domain and the digital domain.





Comments
  • Hi Gavin

    You make some very good points here, and, using current technology, some of these issues are insurmountable. We would certainly have to rethink how we process video if it was in the form of vectors, but maybe we would continue to convert to pixels for this (just as DI was used in the past - in a higher resolution). Even things like blur can be vectorised.

    There could be a vector camera - I mentioned the technique of "autotrace" in the article. You'd need masses of processing power, but that won't be a problem in a few years' time.

    Yes, it gets (massively) inefficient if you're vectorising noise - which is essentially random pixels: exactly what we're trying to get away from.

    My next article about this is going to focus on the "meaning" of an image rather than the appearance of it. This is how you extract an image from the background. It's how you create meaning out of noise.

    I do think that when you're talking about how the eyes work, it's not simply (and even this isn't simple!) a question of how we turn light into meaningful nerve impulses. It's much more about how the high and low levels of perception work. As far as I know, there are at least twelve distinct mechanisms in play here - and probably more. That's the level I think we will be working at in the future. Not just vectors or pixels, but the raw elements of perception. Direct input to the brain in its own primitives. That's the ultimate direction that video's going.

    Comment last edited on about 1 year ago by David Shapton
  • This has all been addressed before, in print. The history includes names like Iterated Systems, Altamira, Lizard Tech and onOne Software. onOne still offers image processing software that includes a fractal scaling function.

    When fractal image compression did not make it mainstream, the companies backing the technology tapped it for scaling relatively low-resolution images to large sizes. It produces superior results compared to all the various raster scaling algorithms. As digital still cameras were reaching 4 megapixels and starting to be used by pros, fractal scaling saw some success in commercial applications.

    Fractal compression, or scaling, is compute-intensive, which was its downfall in the 1990s. Current generations of silicon, along with improved software, may make it applicable to moving pictures.

  • Michael,

    Partly, yes, I remember all of those, but this is different from fractal compression in that it is literally more semantic. It's much higher level, except for the very small detail. We may indeed find that the devil is in the detail. Right now, I don't know whether it's more or less efficient than fractals.

    And yes, definitely, our ability to compute is two or three orders of magnitude greater than back then.

  • Australian IMAX film maker John Weiley has suggested that a sensor of 100 megapixels or more is about the resolution of the human eye so if we have means of viewing such high resolution images, will our brain think it is real? And if so, how will we react to such films? I'm thinking of the dramatic effect that Orson Welles' first radio broadcast of 'War of the Worlds' had on an unsuspecting American audience in 1938.

  • I guess I'm not understanding this? Vector-based drawing doesn't address individual pixels, but a screen will still need some way to display a description from a vector-based file. I can't draw a line inside one pixel; I can only tell it to emit certain colors at certain times, or no colors at all. Therefore the clarity of your vector-based image is still dependent on the actual resolution of the monitor, no? I can see vector-based images being important when you have the possibility of viewing on multiple-resolution monitors, but clarity itself is dependent on the resolution of the monitor you view it on - I would think...

  • Vectors are still always displayed in pixels. All monitors display pixels. It's not a matter of improving clarity for display purposes, it's about effectively containing all of the data. For instance, if you want to draw a circle using pixels, you would do so with thousands of pixels of various shades of colors, resulting in an antialiased edge of the circle. A vector circle would only need a radius and a color. That's FAR less information required. This is pretty much meaningless for 1080p videos, but for 4K and 8K it would be a huge advantage. Why store 10 quadrillion pixels when you can store thousands of vectors of all sorts of sizes and shapes with solid colors or gradients? Plus, vectors' shapes would be extremely easy to morph for animation purposes.

    Then for the topic of film noise, you could simply overlay an animating noise layer on top of the video. As an added bonus, you could give that control to either the movie makers or the home consumer in the form of options for their media player. You can control the amount of film grain easily if it's not hardcoded into the video.

    TL;DR: It's not about visual clarity, it's about storage and performance concerns for extremely high resolution displays.

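The circle comparison in the comment above is easy to put rough numbers on. The byte counts below are illustrative back-of-envelope figures (uncompressed raster, naively encoded vector), not the sizes any real codec would produce.

```python
def raster_circle_bytes(width, height, bytes_per_pixel=3):
    # A raster image must store every pixel, circle or not.
    return width * height * bytes_per_pixel

def vector_circle_bytes():
    # A vector circle: centre (x, y), radius, and an RGB colour.
    # Three 8-byte floats plus three colour bytes.
    return 3 * 8 + 3

raster = raster_circle_bytes(1920, 1080)  # ~6.2 MB uncompressed
vector = vector_circle_bytes()            # 27 bytes
print(raster // vector)  # how many times smaller the vector form is
```

Real raster frames compress heavily, of course, but the gap still illustrates why a shape-level description can be so compact.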
  • It seems to me that you underestimate the complexity of the task.
    Even if you detect all the curves and/or surfaces that the scene is made of, there remains the task of correct lighting. Realistic lighting easily takes hours of processing time for each frame. It may be approximated in realtime, but in this case the movie will look like a videogame.
    Anyhow, when vectorizing a photo, you always have to find a balance between size and quality, and in order to gain in file size you must sacrifice some detail. This is exactly the opposite of what 8K promises.

  • I strongly agree with the idea that future codecs will be vector based and (pixel) resolution independent.
    As shown in the video, this is and will be possible in (hopefully) the not too distant future.

    Though when it comes to digital editing of the footage, vector-based videos are highly counterproductive. Pretty much every shading language currently around is based on rasterisation, as are our graphics cards, and I don't see that changing without some new groundbreaking discovery (like the aforementioned biological display).

    We need shaders to do all the fancy things we do in digital video editing, including the most simple color correction.
    Vectors are so much more complicated than pixels when it comes to color; there is no real alternative (as far as I know) to breaking the color down into little, easy-to-use segments, aka pixels. While machines can handle both things easily, it's hard for a human to handle vector math and to write the math that alters color information in a vector-based approach in the way we intend to.

    I also think that the comparison to CRT screens and analog film is a bit far-fetched. Analog monitors and film aren't any closer to reality than a binary approach to representing an image. There is actually not really a difference in the logic between altering voltage and altering binary numbers; in the end they are both mathematical approaches to determining color values with fixed value ranges. Both are just limited by different factors.

    For analog video it's the inability to read and write voltage with 100% accuracy. For digital video the limitations come more from our own definitions, current digital processors and camera sensors. In theory the bit depth and resolution of videos could go a lot higher than the current video standards allow; we just limit ourselves because our technology isn't there yet, at least not for the mass market.

    I liked the article nonetheless.

  • I've been daydreaming about a codec called Vector Video for the last 5 years. So to see this article call it out by the same name makes me very happy. It's an ego thing. It means that logically I'm on the right track in my thinking. :)

    That being said, I think Vector Video will play a huge role in future AR (Augmented Reality) systems, when we all replace our smartphones with eyeglasses that composite virtual elements over the top of the real world while blending them in with it. In a situation like this, having vector video elements that scale will be hugely beneficial to pulling off the believability of the scapes we view through these glasses.

    So when talking about these technologies we must not just consider a flat screen like we have today; we have to consider the entire shift of video in the future toward something more immersive. I imagine that eventually we'll be watching stage plays in our living rooms. Since everyone has a different-sized living room, those elements will need to resize and reposition themselves to fit proper scale, so that a human being standing in front of us looks the size of any real-life human being that might be standing there.

    The other benefit to vectors in this scenario is that those objects, being vectors, may be easier to extrude three-dimensionally. I'm not 100% sure of that, but in my head it makes sense.

    Eventually all video will be animation. It sort of is today. So when we have these discussions in the future we have to keep in mind that a photograph is never reality, just a certain representation of reality, in much the same way that a description in a novel is a representation of reality. Vector Video, when scaled to a certain size, may begin to look more like animation, but that's OK.

  • I'm no mathematician, but isn't this basically what DCT compression does anyway - expressing sampled values in terms of overlaid cosine waves?

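The commenter above has a point: DCT-based codecs (JPEG, MPEG) already describe each block of samples as a weighted sum of cosine waves. Here is a pure-Python sketch of the 1-D DCT-II and its inverse, written out for clarity rather than speed; the sample block is an arbitrary example.

```python
import math

def dct(samples):
    # DCT-II: each coefficient weighs one cosine frequency.
    N = len(samples)
    return [
        sum(s * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
            for n, s in enumerate(samples))
        for k in range(N)
    ]

def idct(coeffs):
    # DCT-III (the inverse, with normalisation): sum the cosines back up.
    N = len(coeffs)
    return [
        (coeffs[0] / 2
         + sum(c * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
               for k, c in enumerate(coeffs) if k > 0)) * 2 / N
        for n in range(N)
    ]

block = [52, 55, 61, 66, 70, 61, 64, 73]
round_trip = [round(x) for x in idct(dct(block))]
print(round_trip == block)  # True: the cosine sum recovers the samples
```

Codecs get their compression by quantizing or discarding the small high-frequency coefficients before the inverse transform, which is a frequency-domain idea rather than a shape-level vector description.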
David Shapton

David is the Editor In Chief of RedShark Publications. He's been a professional columnist and author since 1998, when he started writing for the European Music Technology magazine Sound on Sound. David has worked with professional digital audio and video for the last 25 years.
