Huge efforts go into establishing the visual style of 'high end' productions via design and cinematography, followed by detailed amendments in post and finishing. Filmmaking is partly about sustaining a consistent and coherent experience for the viewer by balancing a bundle of creative compromises; anyone who expects that to change is being naively optimistic or unnecessarily purist. In most situations, amendments involve a reduction in the diversity of colour impressions from shot to shot and scene to scene. However, there are two cultural issues I think are worth recalling if improvements to the handling of colour are not to be limited to minimising unwanted artifacts or systemic aberrations.
There are relatively few examples of 'high end' digital productions tackling the same dramatic material. One of these has been the 'Wallander' TV series, produced both by the BBC and in Scandinavia. Anyone lucky enough to see both versions will notice a similarity of dramatic style, costume, performance and pacing, but there is one big difference: the landscape, grass, sea and sky are all consistently more intensely coloured in the BBC version. Should we be thinking about general patterns of colour variation from country to country, people to people and producer to producer that are worth valuing, just as we recognise an artist's palette?
For decades, editors have added colour bars to the head of programmes, enabling transmission engineers to make a fairly easy match between incoming material and their station standards. For obvious reasons, VOD services like Vimeo or YouTube want the movie to stand alone. Services like Netflix create dozens of versions of a programme for different bandwidths, frame rates and resolutions, as well as regional versions derived from the NTSC or PAL standards. Is there a case for incorporating a simple metadata set at the head of a programme to do the same job that colour bars have served, in addition to the general colour profile of the system, while ensuring that the multitude of devices people use can read that data?
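As a thought experiment, such a head-of-programme metadata set could be a very small self-describing record. The field names below are entirely hypothetical, a sketch of the kind of payload a player might read in place of colour bars, not any existing standard:

```python
import json

# Hypothetical head-of-programme colour metadata, standing in for colour bars.
# Every field name here is an assumption for illustration only.
colour_header = {
    "primaries": {            # CIE xy chromaticities (Rec. 709 values shown)
        "red":   [0.640, 0.330],
        "green": [0.300, 0.600],
        "blue":  [0.150, 0.060],
    },
    "white_point": [0.3127, 0.3290],   # D65
    "transfer_function": "bt1886",
    "reference_white_nits": 100,
}

# Serialised, the whole record is a few hundred bytes, trivial to carry
# at the head of every delivery version of a programme.
payload = json.dumps(colour_header)
```

A device that could not parse the record would simply ignore it, just as a domestic television ignores colour bars today.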
Since chips are expensive to design and bring into production, it seems unlikely that the move away from specifically designed high end video chips towards mass produced 'photographic' chips (adapted for the purposes of video) will lead to PenTile, Bayer or similar graphics-prioritised technologies being superseded. For movies, might it be worth manipulating 'photo' chips to provide 4:4:4 at lower display resolution, rather than concentrating on the notion of enhancing the progression from HD to UltraHD and beyond? If increased bandwidth will be available, how might it best be exploited? Could one direction be the creation of Bayer-type pixels based on more than three primaries, which might enable finer colour separations for the process of 'pointillist' interaction? Would additional primaries also make sense when amending colour in post, localising regions of the image for correction? Which regions of colour space might be prioritised for development and the identification of additional primaries? One approach might be based on the density of named colours from different cultures which have neighbouring colour values but distinctive identities, enhancing the chance of them being successfully reproduced.
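The 4:4:4-at-lower-resolution idea can be sketched in a few lines. Instead of demosaicing, which interpolates the two missing channels at every photosite, each 2x2 quad of the mosaic is binned into a single pixel that carries a genuinely measured value for every channel. A minimal sketch, assuming an RGGB Bayer layout:

```python
def bin_bayer_rggb(mosaic):
    """Collapse each 2x2 RGGB Bayer quad into one full-colour pixel.

    mosaic: 2D list of sensor values in RGGB layout (even dimensions).
    Returns an image half the size in each dimension, where every pixel
    carries an (R, G, B) triple with no interpolated values: true 4:4:4
    colour sampling, bought with a 2x loss of spatial resolution.
    """
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2  # average both greens
            b = mosaic[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out
```

So a nominally 8K photographic sensor binned this way would deliver a 4K image in which chroma is sampled as densely as luma, with no reconstruction artifacts from demosaicing.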
There are numerous situations which call for quite specific colour characteristics. Think of the distinctive colour of a football club shirt, corporate style books and branding, or colour symbolism in religion. Ideally, filmmakers should be able to refer to them precisely, but can they really do so today?
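Precise reference is technically straightforward once a colour is pinned to a device-independent space. A sketch, assuming a brand colour published as an sRGB hex value, converting it to CIE XYZ (D65) via the standard sRGB linearisation and matrix:

```python
def srgb_hex_to_xyz(hex_code):
    """Convert an 8-bit sRGB hex colour to device-independent CIE XYZ (D65)."""
    rgb = [int(hex_code.lstrip("#")[i:i + 2], 16) / 255.0 for i in (0, 2, 4)]
    # Undo the sRGB transfer function to recover linear light.
    lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
           for c in rgb]
    r, g, b = lin
    # Standard sRGB-to-XYZ matrix (IEC 61966-2-1).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return (x, y, z)
```

A club or brand could publish XYZ (or Lab) coordinates alongside its hex codes, giving a grader a fixed target rather than a device-dependent approximation.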