
Why are we still using workstations?


The Return of the Mainframe

Phil Rhodes believes that we're on the cusp of a fundamental change in both computing and our relationship to it, one that harks back to the mainframes and network terminals of the 1970s.

This is a discussion of several coincident subjects: cameras and recording media, workstations, tablets and phones, and workflows and communications over the thing we used to call the internet but which is increasingly called 'the cloud' for reasons of commercial expediency. Something strange is happening, yet there is a surprising lack of recognition of what could really be achieved.

Let's look at the history. For most of the last four or five decades, generally speaking, computing has been driven by a constant need to advance, to speed up, to become more capable. This is what has made it possible to handle images that can (without wanting to start an argument) at least technically rival those of 35mm and even 65mm motion picture negative without the inconveniences of photochemistry. Recently though, there's been something of a reversal. The boom area isn't in the cutting edge of performance anymore. We're not chasing megahertz, as we were in the 80s and 90s when computers proudly emblazoned their clock speeds on a front-panel display next to a permanently-pressed 'turbo' button.

Instead, we're happy to have much smaller, much more modest computers – albeit ones that weigh half a pound, work all day without being plugged into anything and can communicate using an array of radio modems. Some of these computers we fit with imaging sensors and call 'cameras', and it's no longer unusual to see cameras, such as Sony's PXW-X180 which we reviewed, becoming fully-paid-up members of this society, with wireless ethernet and connectivity to pocket devices. It's common to walk around a film set watching rushes on a tablet, and there are huge benefits to this approach. The degree of research and development that's gone into, say, a Samsung Galaxy Tab, plus all the standards, protocols and technologies that it works with, is vastly in excess of anything the film industry could ever have created just for that job.

What's old is new

What this does, though, is go some way towards recreating the mainframe-and-terminal arrangement that existed through to the 1980s: a big central computer which did all the work, and smaller, cheaper, relatively straightforward user-interface devices which people sat in front of. Build cameras this way and size, weight, cost, power consumption and complexity all get pushed in useful directions, at least in theory. In practice, a modern camera is a vastly more powerful computer than most of the mainframes that used to exist, but there's a certain symmetry here.

Or... well, is there? The cloud (the internet, the web, whatever you want to call it; it's a server somewhere) is often just a storage device. YouTube's servers will recompress video uploaded to them, for instance, but mostly the heavy lifting currently done by the cloud relates more or less exclusively to storage. A trick is being missed.

There are exceptions. Renderfarm services such as Render Rocket have been available by the hour for a while now: the project files and any associated media are uploaded, and the rendered frames come back. This makes all kinds of sense. Modelling and animation time can be spent on a single workstation, with the renderfarm pressed into service as needed, and the overall amount of hardware that needs to exist (and thus the overall cost to everyone) is minimised. Of course, from a technical perspective, this only works if transferring the final media off the renderfarm and back onto the source workstation can be done quickly enough over the internet, but that's not too difficult a requirement to fulfil if we're talking about high-end VFX finals.
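To make that concrete, here is a minimal sketch of what renting render time by the hour tends to look like from the workstation's side: package the project, upload it, wait, pull the frames back. The endpoint, job fields and polling behaviour are invented for illustration; they are not Render Rocket's actual API.

```python
# Hypothetical sketch of the "rent a renderfarm by the hour" workflow:
# upload project + media, wait for the job to finish, pull the frames back.
# The endpoint and job fields are placeholders, not any real service's API.
import time
import requests

FARM = "https://renderfarm.example.com/api"   # hypothetical service

def submit_render(project_zip: str, frames: str = "1-240") -> str:
    """Upload the packaged project and return a job ID."""
    with open(project_zip, "rb") as f:
        resp = requests.post(f"{FARM}/jobs",
                             files={"project": f},
                             data={"frames": frames, "priority": "standard"})
    resp.raise_for_status()
    return resp.json()["job_id"]

def wait_and_fetch(job_id: str, out_path: str) -> None:
    """Poll until the farm reports the job done, then download the frames."""
    while True:
        status = requests.get(f"{FARM}/jobs/{job_id}").json()["status"]
        if status == "done":
            break
        time.sleep(30)                         # rendering takes a while
    with requests.get(f"{FARM}/jobs/{job_id}/frames", stream=True) as r:
        r.raise_for_status()
        with open(out_path, "wb") as out:
            for chunk in r.iter_content(chunk_size=1 << 20):
                out.write(chunk)

if __name__ == "__main__":
    job = submit_render("shot_042.zip", frames="1001-1120")
    wait_and_fetch(job, "shot_042_frames.zip")
```

The point is how little the workstation has to do: everything expensive happens on someone else's hardware, and only the finished frames come back over the wire.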

That, though, is a fairly crude implementation of what could be achieved with better integration. Microsoft has recently released Windows 10, with the explicit intention of addressing exactly this convergence of devices. As with many other things, such as Apple's iCloud, Google Drive and a million competitors, what this really represents is that the user interface looks graphically similar on various platforms and there is some facility for automatically ensuring all devices have up-to-date copies of files that have been modified on any of them. All these implementations seem like shallow, partial attempts at what could be achieved with a bit of thought.
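For what it's worth, the 'up-to-date copies everywhere' part of these services boils down to something like the following toy reconciliation step. This is deliberately simplistic and is not how iCloud or Google Drive actually work internally; nothing in it runs your applications anywhere, it just moves files around, which is rather the point.

```python
# Toy illustration of the "keep every device up to date" part of cloud sync:
# compare content hashes and modification times, copy whichever side is newer.
# Flat directories only; real services are vastly more involved.
import hashlib
import os
import shutil

def digest(path: str) -> str:
    """Content hash, so identical files are never re-copied."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def sync(local_dir: str, remote_dir: str) -> None:
    """One pass of reconciliation: newer copy wins, identical copies are skipped."""
    for name in set(os.listdir(local_dir)) | set(os.listdir(remote_dir)):
        a, b = os.path.join(local_dir, name), os.path.join(remote_dir, name)
        if not os.path.exists(b):
            shutil.copy2(a, b)
        elif not os.path.exists(a):
            shutil.copy2(b, a)
        elif digest(a) != digest(b):
            src, dst = (a, b) if os.path.getmtime(a) > os.path.getmtime(b) else (b, a)
            shutil.copy2(src, dst)
```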

Why, for instance, can I not begin editing something on a workstation, leave for the airport, grab a tablet as I go and continue on with the same task, looking at the same display, continuing to run the application on the workstation, transitioning it entirely to the tablet at will or as necessary? Why can I not simply buy a package of processing time, to be deployed as required by taxing applications on my workstation, in the same way I pay for a cellphone contract? And I'm not talking about application-specific stuff, here. I want to be able to run any application, send the appropriate parts of its memory allocation and call stack over the network, and have it run remotely.
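The closest thing you can cobble together today with off-the-shelf tools is much coarser: shipping a self-contained, picklable piece of work to another machine and waiting for the answer. The sketch below uses Python's standard library; the host, port and authkey are placeholders, and this is emphatically not live process migration of a running application.

```python
# Toy sketch of offloading one unit of work to another machine. Nothing here
# moves a live call stack; it just sends a (function, args) pair over the
# network and returns the result. Host, port and authkey are placeholders.
from multiprocessing.connection import Listener, Client

def run_worker(address=("0.0.0.0", 6000), authkey=b"not-a-real-secret"):
    """Remote side: accept (function, args) pairs and send back results."""
    with Listener(address, authkey=authkey) as listener:
        while True:
            with listener.accept() as conn:
                func, args = conn.recv()      # unpickled by the connection
                conn.send(func(*args))        # the work happens here, remotely

def offload(func, *args,
            address=("render-node.example.com", 6000),
            authkey=b"not-a-real-secret"):
    """Local side: hand one call to the remote machine and wait for the answer."""
    # Note: pickle sends functions by reference, so the worker must be able to
    # import the same function under the same module path.
    with Client(address, authkey=authkey) as conn:
        conn.send((func, args))
        return conn.recv()
```

That restriction, that both machines must already share the code and much of the environment, is exactly the gap: generalising it away is the operating-system-level support discussed below.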

The hurdles

Okay, these are big asks, with significant technical challenges attached, but they are possible. Virtual-machine software architectures like Java and .NET, as well as the hardware virtualisation in modern CPUs, address many of the issues, albeit at the cost of some performance. To do it really well, though, would require support at the level of the operating system and, ideally, a significant convergence in native CPU architectures. Done well, we could each have a single, unified desktop computing experience tied to a username and password, accessible from anywhere on the global internet and almost infinitely scalable to the task at hand, optimising cost, battery life, portability, environmental footprint and, ultimately, performance. Streaming raw sensor data to the cloud and performing the debayer there is a beautifully powerful, beautifully scalable, beautifully commoditised approach which is verging on possible right now. It just needs to be organised.
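As a gesture towards why debayering scales out so naturally: it is a pure function of the raw mosaic, so frames (or slices of frames) can be farmed out to as many machines as you care to pay for. The sketch below uses the crudest possible approach, collapsing each 2x2 RGGB tile into one half-resolution RGB pixel, purely to show the shape of the operation; real debayer algorithms are far more sophisticated.

```python
# Minimal, deliberately crude demosaic: each 2x2 RGGB tile of the sensor
# mosaic becomes one RGB pixel at half resolution. Useful only to show that
# the operation is a pure per-tile function of the raw data, and therefore
# trivially parallelisable across cloud machines.
import numpy as np

def debayer_rggb_half(raw: np.ndarray) -> np.ndarray:
    """Collapse an RGGB mosaic to a half-resolution RGB image."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    g  = (g1.astype(np.float32) + g2.astype(np.float32)) / 2.0
    return np.dstack([r, g, b]).astype(np.float32)

if __name__ == "__main__":
    # Stand-in for one frame of raw sensor data (4K-ish, 12-bit values).
    frame = np.random.randint(0, 4096, size=(2160, 4096), dtype=np.uint16)
    rgb = debayer_rggb_half(frame)
    print(rgb.shape)   # (1080, 2048, 3)
```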

Right now, we're some way from that. If I want to get a photo off an iPhone and onto a computer other than the one associated with the owner's iTunes account, the only real approach is to email it to myself. A large part of the reason for that is, again, commercial expediency. The solution lies in finding a commercial imperative to do things that are good for the situation – things like Forbidden Technologies' Forscene editor, which would be called cloud editing these days but was so far ahead of its time that it predates the terminology. There are security concerns (it's the internet), and many of the things we've talked about will only become entirely practical with continued increases in the speed of the network. When that happens, though, we might well find ourselves back in the 70s world of mainframes and terminals. The difference is that the terminals might look like everything from teacups to cameras.

Whether we really need the wifi teacup is another matter.

Featured Image Credit: ESO

