
HEVC/H.265: Everything you need to know



In a major new article, Phil Rhodes explores the background to HEVC/H.265, and explains what makes it so good at compressing video. Read this if you want to know how almost all video - including 4K - will be delivered in the near future.

Since almost the first days of digital video, there’s been a need to reduce the otherwise unmanageable amount of data that uncompressed video represents. While it’s now relatively straightforward to handle standard-definition, and even high-definition, video as a sequence of unadulterated bitmaps, the demand for ever-higher resolution means that efficiently reducing the bitrate of video is going to be an issue for as long as people need to sell televisions.
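To put some very rough numbers on that, here is a quick sketch of what uncompressed video works out to at 25 frames per second. The figures and assumptions (8-bit 4:4:4, 24 bits per pixel) are illustrative, not taken from the article:

# Rough, illustrative arithmetic: uncompressed data rate in megabits per second,
# assuming 24 bits per pixel (8-bit 4:4:4) at 25 frames per second.
def uncompressed_mbps(width, height, fps=25, bits_per_pixel=24):
    return width * height * bits_per_pixel * fps / 1_000_000

for name, (w, h) in {"SD (720x576)": (720, 576),
                     "HD (1920x1080)": (1920, 1080),
                     "4K UHD (3840x2160)": (3840, 2160)}.items():
    print(f"{name}: {uncompressed_mbps(w, h):,.0f} Mbit/s")

# Prints roughly 249, 1,244 and 4,977 Mbit/s respectively - hence the need for compression.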

Compression has been around since the early 90s

Video compression became a mass-market technology in the early 90s, with the release of QuickTime in December 1991 and Video for Windows a year or so later. At the time, the performance of video codecs, in terms of the ratio of bitrate to image quality, was limited mainly by the performance of the system that decoded it, which invariably meant the CPU of a desktop PC. Video codecs are generally highly asymmetric: encoding takes more work than decoding, often several multiples of realtime, but decoding must usually happen in realtime. At the time, Intel’s 486 processor line was ascendant, but with performance limited to perhaps fifty million instructions per second, use of an encoding scheme such as the now-current H.264/MPEG-4 AVC was impractical. Both Video for Windows and QuickTime were initially most often used with Cinepak, a codec based on wildly different techniques to more modern ones, but with the key feature that it was designed to work in what, by modern standards, seem extremely modest circumstances. Decoding 320 by 240 frames at the 150 kilobytes per second of a single-speed CD-ROM drive is something you can absolutely do with H.264, but you couldn’t have decoded it on the CPU of a Sega Mega Drive (er, Genesis, Americans) games console, circa 1990.
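For a sense of just how modest those circumstances were, here is a small sketch of the compression ratio Cinepak had to achieve for CD-ROM playback. The 15 frames per second and 24-bit colour figures are assumptions for illustration, not quoted from the article:

# Illustrative only: how much Cinepak had to squeeze 320x240 video to fit the
# 150 KB/s transfer rate of a single-speed CD-ROM drive (assuming 15 fps, 24-bit colour).
width, height, fps, bytes_per_pixel = 320, 240, 15, 3

raw_rate = width * height * bytes_per_pixel * fps   # uncompressed bytes per second
cd_rate = 150 * 1024                                 # single-speed CD-ROM throughput

print(f"Uncompressed: {raw_rate / 1024:.0f} KB/s")               # about 3375 KB/s
print(f"Compression ratio needed: {raw_rate / cd_rate:.1f}:1")   # about 22.5:1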

Drive for better quality

The drive for better quality for the bitrate, as well as the need for better absolute quality and higher resolution, is nothing new, and has largely advanced in step with the ability of ever-improving hardware to handle more elaborate codec techniques. From the late 80s onward, approaches that are recognisably the technological forerunners of current codecs began to emerge, particularly H.261, ratified in 1988, which was designed to carry video over ISDN lines at bitrates from 64Kbps upward. Through the last decade or so, and ever-increasing H-numbers (which come from ITU-T Recommendation numbers), the performance of video codecs has improved more or less alongside the ability of affordable electronics to decode them. This is good, given the explosive success of video-on-demand services and the resulting pressures placed on internet and cellular network bandwidth. One would be forgiven for assuming, with maximum cynicism and misanthropy, that the work involved in all this improvement is being done mainly so that people can send us lots more advertising without having to upgrade their technology. Either way, it’s now clear that the internet and various riffs on video-over-IP technology are what’s going to provide the video-on-demand experience that’s been discussed since the 80s, even if the people who developed the protocols on which the internet runs probably didn’t foresee this use.


Phil Rhodes

Phil Rhodes is a Cinematographer, Technologist, Writer and above all Communicator. Never afraid to speak his mind, and always worth listening to, he's a frequent contributor to RedShark.

