I'm a big fan of vector video: the idea that, instead of pixels, you describe a scene mathematically. There's nothing new about the principle – it's exactly how the .pdf format works – but it's an awful lot more involved when video is concerned, quite apart from the nontrivial matter of how you convert pixels into vectors in the first place.
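To make the contrast concrete, here's a toy sketch (not a real codec, and nothing to do with how any actual vector video format works): the same disc described once as a grid of pixels and once as a handful of parameters.

```python
# Toy illustration: raster vs vector descriptions of the same shape.

def raster_disc(size, cx, cy, r):
    """Raster form: a size x size grid of 0/1 pixels approximating a disc."""
    return [[1 if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 else 0
             for x in range(size)]
            for y in range(size)]

# Vector form: three numbers, and resolution-independent.
vector_disc = {"cx": 32, "cy": 32, "r": 10}

pixels = raster_disc(64, 32, 32, 10)
raster_cost = 64 * 64   # one value per pixel: 4096 numbers
vector_cost = 3         # cx, cy, r
print(raster_cost, vector_cost)
```

The vector form stays three numbers whether you render it at 64×64 or 8K, which is the appeal.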
One of the reasons I think this is so important is because, in a sense, it mirrors the way we perceive things. Nature is not presented to us through the medium of pixels, which are entirely man-made. When you have problems with your vision, you never, ever see pixels or any type of rectangular artefacts. They're just not part of the natural world.
At least part of what we do when we see things is refer to an internal database. The mechanism for doing this is rapidly becoming better understood. Essentially, when we perceive an object, we refer the 'data' we receive through our senses to a repository of things we already know about. If there's a match, we can say we 'know' the thing we're looking at.
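A crude way to picture this "recognition as matching" idea is a nearest-neighbour lookup: compare incoming features against a stored repository and report the closest known item. The feature vectors and labels below are invented purely for illustration; real perception is vastly more complicated.

```python
# Toy sketch: match an observation against a repository of known things.

def match(observation, repository):
    """Return the label whose stored features are closest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(repository, key=lambda label: dist(observation, repository[label]))

# Hypothetical 'things we already know about', as 3-number feature vectors.
repository = {
    "cat":  [0.9, 0.1, 0.3],
    "car":  [0.1, 0.9, 0.8],
    "tree": [0.2, 0.3, 0.9],
}

print(match([0.85, 0.15, 0.25], repository))  # closest stored item: "cat"
```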
The part of the brain responsible for this, the neocortex, works hierarchically. First it registers the small details, then builds a 'bigger picture', until finally it understands the whole scene.
It's possible to mimic this process with a neural network, and these are cropping up everywhere these days, especially at Google (or should I say "Alphabet").
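The detail-first, bigger-picture-later idea can be sketched in a couple of lines. This is not a trained network, just a two-stage toy in the same spirit: stage one finds small details (edges), stage two pools them into a coarser summary.

```python
import numpy as np

def detect_edges(image):
    """Stage 1 (small details): horizontal-gradient magnitude between neighbours."""
    return np.abs(np.diff(image.astype(float), axis=1))

def pool(features, block=2):
    """Stage 2 (bigger picture): max-pool into coarse block x block regions."""
    h, w = features.shape
    h2, w2 = h // block, w // block
    return features[:h2 * block, :w2 * block] \
        .reshape(h2, block, w2, block).max(axis=(1, 3))

image = np.zeros((8, 8), dtype=int)
image[:, 4:] = 1               # a vertical boundary down the middle
edges = detect_edges(image)    # fires only along that boundary
summary = pool(edges)          # coarse map still locates it
print(summary)
```

A real deep network stacks many such stages, learning its own detectors rather than using a hand-written gradient, but the hierarchy is the same shape.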
Here's a living example. Have a go at this game. You have to draw a requested object within 20 seconds. The computer tries to guess what it is. It's often successful, but sometimes, with great humility, it fails.
It's well worth a try. It's like listening to your brain talking out loud. And if you think it seems a bit clunky, just imagine what it would be like with 8K video as a source instead of a few badly-drawn lines. Then imagine the guesswork being thousands, if not millions, of times better than this. And then imagine all of that on a chip inside a video camera.