Researchers at the University of Bath claim to have invented a codec based not on pixels but on vectors. Vector graphics have been around for a long time, but until now they have not been considered suitable for general-purpose image work - and certainly not for video - because, while they have distinct advantages, they are difficult to use where images are complex.
A vector graphic is essentially a mathematical description of a scene. With pixel matrices - sensors, in other words, whose resultant images are often described as "bitmaps" - a scene can be captured and reproduced irrespective of its complexity. Even if every pixel is different, this is no harder to capture than if they were all the same, subject only to the optical acuity of the device. Of course, the story is different when it comes to compressing those images, but that's not the issue here.
Vectors break down a scene into primitive objects. For example, an upper-case letter "I" in a sans-serif font would be represented as a bitmap by ten or twelve vertical pixels (depending on the size of the font versus the resolution of the display). A vector description might instead say: a letter "I" is a black rectangle whose length is twelve times its width. More complex letters require more complex descriptions, but the principle is exactly the same. Primitive objects can be grouped together into more complex ones, and, while the most basic elements are "outlines" and "fills", other, more subtle attributes such as "gradients" can be specified.
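The letter "I" example can be sketched in a few lines of Python. This is purely illustrative - the function name and the 12:1 rectangle rule come from the example above, not from the Bath codec - but it shows the key point: the vector form stores one rule, while the bitmap stores every pixel.

```python
# Hypothetical sketch: the vector rule "a black rectangle whose length
# is twelve times its width", rasterized to a bitmap at a chosen size.

def rasterize_I(width: int) -> list[list[int]]:
    """Turn the one-line vector rule into a pixel grid (1 = black)."""
    height = 12 * width          # the rule: length is 12x the width
    return [[1] * width for _ in range(height)]

small = rasterize_I(1)   # 12 pixels tall, 1 wide
large = rasterize_I(3)   # 36 pixels tall, 3 wide - same rule, more pixels
```

Note that the description itself never changes; only the rasterization does, which is exactly why vector descriptions scale where bitmaps do not.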
You can see how the complexity of descriptions increases quickly as the level of detail in a scene rises. To match bitmap accuracy in a complex scene, you'd need a very long description, and a lot of computing power to encode and decode.
But if you could do this, the rewards would be huge.
The benefits of vectors
For a start, playback would be resolution-independent. It wouldn't matter how much detail your display device could resolve, because the vector images would always be decoded at the exact resolution of your screen. Bandwidth could be saved merely by sending less detail. The result would be just as sharp, but certain features would be less detailed - or, you could say, more ambiguous.
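Resolution independence falls out of the maths almost for free. As a minimal sketch (the function and scene here are invented for illustration), the same vector segment can be sampled at whatever density the display needs, with no resampling artefacts:

```python
# Hypothetical sketch: one vector line segment, sampled at the exact
# resolution of two different displays. The description is constant;
# only the number of sample points changes.

def sample_segment(x0, y0, x1, y1, n):
    """Return n evenly spaced points along the segment (x0,y0)-(x1,y1)."""
    return [(x0 + (x1 - x0) * i / (n - 1),
             y0 + (y1 - y0) * i / (n - 1)) for i in range(n)]

# Same diagonal, decoded for a low-res and a high-res display:
lo = sample_segment(0, 0, 1, 1, 10)     # 10 samples
hi = sample_segment(0, 0, 1, 1, 1000)   # 1000 samples, same sharpness
```

Either way the endpoints land exactly where the description says they should; there is no "native resolution" to upscale from.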
And frame rates wouldn't matter either, because you could decode the video at whatever frame rate you like. Vectors would simply morph from one set of objects to another.
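The morphing idea can be sketched with simple linear interpolation between two vector keyframes. The representation and names below are assumptions for illustration - the actual Bath codec's interpolation is not public here - but they show how a decoder could generate in-between frames at any rate it likes:

```python
# Hypothetical sketch: decoding at an arbitrary frame rate by morphing
# a vector object between two keyframes via linear interpolation.

def lerp_rect(a, b, t):
    """Interpolate two rectangles (x, y, w, h) at time t in [0, 1]."""
    return tuple(av + (bv - av) * t for av, bv in zip(a, b))

key0 = (0.0, 0.0, 10.0, 12.0)   # rectangle at the first keyframe
key1 = (5.0, 0.0, 10.0, 24.0)   # rectangle at the next keyframe

# A higher-frame-rate decoder simply asks for more values of t:
mid = lerp_rect(key0, key1, 0.5)   # → (2.5, 0.0, 10.0, 18.0)
```

A pixel codec has to invent in-between frames by analysing motion; here the in-between frame is just another evaluation of the same description.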
This is pretty profound stuff. On the face of it, we're saying that all this excitement about 4K and 8K is worthless, because in future we won't be interested in pixels at all.
But that's not strictly true. Because, to get to the vector encoding stage, you still need to capture the image as… pixels.
Just how good the Bath University codec will be depends on a lot of things. Initially, it's probably going to find its main role in life as a low-bitrate codec. A very sharp image can look more real than a less sharp one that actually contains more detail, and halfway between those two is probably a sweet spot where this codec could operate.
General-purpose vector-based codecs are still a long way off, but until now some people were dismissing them as impossible. That's clearly wrong now, and, while 8K video display is seen as the next logical progression after 4K, the data rates you need for 8K video transmission or streaming are so high that I'm pretty sure, before we get there, there will be questions about whether a non-pixel-based codec might be better for the future of video.