How exactly they do this is obviously their commercial secret, but it is probably based on well-known principles. And one of these is bound to be that some material is easier to compress than other material.
Think of it like this.
You can compress a three-hour feature film to only a few bytes of information, as long as you have the right material. If you chose this option, though, you'd be rather restricted in your choice of storyline. You could only do it if every pixel of every frame of the film was the same colour. This would significantly challenge most scriptwriters, but it would mean that you could reproduce the entire film with perfect clarity using only the following instructions:
"Every pixel in the first frame is (insert colour here)"
"Every frame is the same as the first"
At the other end of the scale, if your film consisted of random noise, such that every pixel differed randomly from every other, both across a frame and over time, then it would actually be incompressible.
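You can see both extremes for yourself with any general-purpose compressor. The sketch below (my own illustration, not anything Beamr-specific) uses Python's zlib to squash one "frame" of identical pixels and one frame of pure noise:

```python
import os
import zlib

# One "frame": a million pixels' worth of bytes.
frame_size = 1_000_000

# Extreme 1: every pixel the same colour.
uniform = bytes([128]) * frame_size

# Extreme 2: pure random noise.
noise = os.urandom(frame_size)

# The uniform frame shrinks to a tiny fraction of its size,
# while the noise stays at (roughly) its original size.
print(len(zlib.compress(uniform)))
print(len(zlib.compress(noise)))
```

The uniform frame comes out at around a thousand bytes; the noise barely compresses at all, because there is no pattern for the compressor to exploit.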
Somewhere in-between those two extremes comes what Beamr is doing.
For a start, if you reduce the amount of noise in a picture, then it will encode with less data. And if you turn down the high frequencies (i.e. the sharp edges and transitions) then you can compress more efficiently.
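To illustrate that second point, here is a rough sketch (again my own, using zlib as a stand-in for a real video codec): a noisy gradient scanline is run through a crude 3-tap moving average, which suppresses the noise and the high frequencies, and the filtered version compresses smaller than the raw one.

```python
import random
import zlib

random.seed(0)

# A hypothetical scanline: a stepped gradient with sensor noise on top.
raw = bytes(
    min(255, max(0, (i // 40) * 10 + random.randint(-8, 8)))
    for i in range(100_000)
)

# A crude noise reduction / low-pass filter: average each pixel
# with its two neighbours.
smoothed = bytes(
    (raw[max(0, i - 1)] + raw[i] + raw[min(len(raw) - 1, i + 1)]) // 3
    for i in range(len(raw))
)

# The smoothed scanline compresses to fewer bytes than the raw one.
print(len(zlib.compress(raw)), len(zlib.compress(smoothed)))
```

Real encoders are far more sophisticated, but the principle is the same: less noise and fewer sharp transitions mean fewer bits.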
The skill is in knowing where you can apply these tricks (and undoubtedly others) without any noticeable degradation, and to do it automatically. This seems to be what Beamr are able to do.
But why would you need to do this with H.265 now on the scene? There are at least two reasons for this.
First, you can use all the existing infrastructure. The whole world of video streaming is based around H.264 and the Beamr solution will work with all of it.
Second, H.265 isn't just H.264 but better, it's H.264 but harder as well. It takes more processing to decode, and a lot more to encode. By contrast, a Beamr-optimised stream is no harder to decode than any other H.264 stream.
More info on Beamr here