Another chance to read how you can make HD look like 4K. It's really not that difficult, when you consider that it's hard to make 4K look like 4K!
I think people who make videos and films are sometimes a little unclear about the relationship between 4K and HD. It’s something we’ve already touched on in RedShark. The most important message is this: 4K carries four times the data of HD, and you’ll need four times the space to store it.
But while there are four times as many pixels as in HD, the picture is not four times better. It’s actually only twice as good, if you measure the “goodness” of a picture by the number of pixels in a straight line.
Just to be completely clear about this, 4K has only twice the linear resolution of HD. This has some very important consequences.
Perhaps the most important of these is that if you degrade the sharpness of a 4K image by as little as just over half a pixel, you might as well have started in HD.
Half a pixel! That’s roughly one eight-thousandth of the width of the screen. You don’t have to have much wrong with your picture to lose its 4K-ness in an instant.
Remember that when we judge the sharpness of a picture, we’re not actually measuring it. What we’re absolutely not doing is counting pixels and how they merge with their neighbours. We base our impressions of sharpness on - impressions. We can be fooled into thinking that a picture is sharper than it is.
How does "sharpness" work?
Do you know how the “sharpness” control works in Photoshop? Well, I’m pretty sure that it’s evolved into something quite sophisticated, but the basic principle is rather simple. Our perception of how sharp an image is depends on edges. The more distinct edges we see, the sharper we think the picture is. When we sharpen a picture, we’re not adding information. How could we be? If adding information were that easy then I could just write half this article, move the "detail" slider another fifty percent, and it would be finished!
What we’re doing is saying “whenever you see an element in the image that is changing more rapidly than the others, make it change even more rapidly".
Make it sharper, in other words.
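As a rough sketch of that principle (not Photoshop’s actual algorithm), here’s a one-dimensional “unsharp mask” in NumPy: subtract a blurred copy of a signal to find where it changes rapidly, then exaggerate those changes. The function name and the three-tap blur are illustrative choices, not anything from the article.

```python
import numpy as np

def sharpen_1d(signal, amount=1.0):
    """Crude 1D 'unsharp mask': find where the signal changes
    quickly (by subtracting a blurred copy), then exaggerate
    those changes. No new information is added -- the existing
    edges are just made steeper."""
    kernel = np.ones(3) / 3.0                  # 3-tap box blur
    blurred = np.convolve(signal, kernel, mode="same")
    detail = signal - blurred                  # the rapid changes
    return signal + amount * detail            # push them further

# A soft edge: a ramp from black (0) to white (1) over a few samples.
edge = np.array([0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0])
sharpened = sharpen_1d(edge, amount=1.5)
```

Run this and you’ll see the sharpened version overshoots below 0 just before the ramp and above 1 just after it. That overshoot is exactly what our eyes read as “sharper”, even though the underlying information is unchanged.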
The sharpness of an image depends on frequency response. That probably sounds like it should be in an article about audio rather than video, but there’s really no difference. Think of a chessboard. If you were to scan a laser beam across the surface and measure the reflections, you’d essentially get no reflection from the black squares, and a 100% reflection from the white ones. When you move from black to white, the change should be instant, subject only to the width of the laser beam.
That instantaneous change represents a high frequency at that instant. Whether you actually see an instantaneous change depends on the ability of your recording, transmission and display system to reproduce it.
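You can simulate that laser scan in a few lines. This is just a toy model of the chessboard idea above, with made-up numbers (eight squares, sixteen samples each): the jump from black to white happens between two adjacent samples, which is the fastest change the sampling grid can represent.

```python
import numpy as np

# One row of a chessboard, "scanned" left to right: black squares
# reflect ~0%, white squares ~100%. Eight squares, 16 samples each.
square = np.repeat([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0], 16)

# Each black/white boundary is a full 0 -> 1 jump between two
# adjacent samples -- an (effectively) instantaneous change,
# i.e. a very high frequency at that instant.
steps = np.abs(np.diff(square))
edge_positions = np.nonzero(steps)[0]
print("edges found at samples:", edge_positions)
```

Whether a real system shows you those clean jumps depends, as the article says, on whether every stage in the chain can pass that frequency through unharmed.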
High frequencies mean more information. If you want to reproduce twice the frequency, then, in a digital system, you have to have twice the sample rate. That means twice the data. One of the easiest ways to reduce data rates is to limit high frequencies. In a sense, all you need to reduce your data rate by half or more is to introduce a low-pass filter (one that only lets lower frequencies through). Most recording, transmission and display systems do this anyway in the way they handle the information, and it’s certainly a fundamental aspect of the way most codecs work.
Let’s go back to the chessboard. What’s the effect of filtering the high frequencies out? The edges between the squares look blurred. They look like that because there’s less information to describe exactly where they are. Remember: sharpness is all about information.
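Here’s a minimal sketch of that effect, using a simple moving-average as the low-pass filter (a real codec filter would be more sophisticated, and the tap count here is an arbitrary choice). Filter a hard chessboard edge and the instantaneous jump gets smeared across several samples:

```python
import numpy as np

def low_pass(signal, taps=9):
    """Moving-average low-pass filter: each output sample is the
    mean of its neighbours, which suppresses high frequencies."""
    kernel = np.ones(taps) / taps
    return np.convolve(signal, kernel, mode="same")

# A hard black-to-white chessboard edge: 32 black samples, 32 white.
edge = np.concatenate([np.zeros(32), np.ones(32)])
soft = low_pass(edge)

# Count samples that are neither fully black nor fully white in
# the region around the edge: the instantaneous jump is now spread
# out, because the information saying exactly where it was is gone.
transition = np.count_nonzero((soft[16:48] > 0.01) & (soft[16:48] < 0.99))
print("samples in the blurred transition:", transition)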
Artificially boosting high frequencies in an HD image can make it look significantly sharper - enough, perhaps, to make it look like it’s 4K.
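Putting the whole article together in one hedged sketch: upscale an “HD” signal to twice the sample count (adding no real detail), then boost whatever rapid changes survived. The function names and numbers are mine, purely for illustration.

```python
import numpy as np

def upscale_2x(signal):
    """Double the sample count by linear interpolation -- the 1D
    equivalent of putting HD onto a 4K pixel grid. No detail is
    added; the in-between samples are just averages."""
    x_new = np.linspace(0, len(signal) - 1, 2 * len(signal))
    return np.interp(x_new, np.arange(len(signal)), signal)

def boost_high_frequencies(signal, amount=1.0):
    """Unsharp-mask-style boost: exaggerate whatever rapid changes
    survived the upscale, so edges read as sharper than they are."""
    blurred = np.convolve(signal, np.ones(3) / 3.0, mode="same")
    return signal + amount * (signal - blurred)

# A hard edge in the "HD" original...
hd_edge = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# ...upscaled to the "4K" grid, then artificially sharpened.
fake_4k = boost_high_frequencies(upscale_2x(hd_edge), amount=1.5)
```

The boosted edge overshoots its black and white levels, which is what makes it *look* sharper, even though no 4K-grade information was ever there.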