Building some sort of cinema camera is no longer the multi-year process it once was. Sensor modules and compact, high-power processing electronics are practically off the shelf at this point. But let’s emphasise: some sort of cinema camera. Making a good one is still highly non-trivial, partly because it’s difficult to get exclusive access to cutting-edge sensor technology, partly because good colour science is a matter of opinion, and partly because a nice user experience is a complex and demanding thing. It’s been tried, a lot. Let’s see how Octopus do.
Current images depict the Octopus camera as a nicely-machined, white-finished chassis with a lens mount on the front. So far, so camera; current advertised specifications describe a 5K full-frame camera shooting up to 48fps at full resolution and 100fps at 3K, and a 4/3” format 4K camera capable of 70fps at full resolution and 240fps at 3K. Both options are described as capable of twelve stops of dynamic range, and while that may seem uncompetitive next to current fifteen-stop cameras, both are global shutter. Global shuttering invariably costs dynamic range, since the additional electronics it requires take up space that could otherwise go to bigger photosites, but twelve stops feels like a good compromise for a yearned-for feature. Nobody likes rolling shutter.
The headline recording format is Cinema DNG, optionally in its losslessly-compressed guise. There are two ways to look at this: there were once many cheerleaders for the idea of recording one file per frame, on the basis that a sudden power outage wouldn’t necessarily destroy a whole take, and because it’s easier to retrieve only the required frames from long-term storage when assembling an online edit. On the other hand, handling a take as a single file has a certain convenience and is compatible with a much wider variety of tools (at least for non-raw formats; there’s only one file-per-take raw format under current development that isn’t firmly associated with a camera manufacturer). In computer science terms, it’s really splitting hairs: there’s not much difference between a folder full of frames and a file full of frames.
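The splitting-hairs point is easy to demonstrate: a file-per-frame take is just a directory of numbered DNGs, and pulling a range of frames for an online edit is a list slice rather than a container parse. A minimal sketch (the `take_000001.dng`-style naming scheme here is hypothetical, not Octopus's actual convention):

```python
import os
import re
import tempfile

def frames_in_take(folder):
    """Return the CinemaDNG frames of a take in shooting order.

    Sorts numerically on the frame counter embedded in the filename,
    so 'take_000010.dng' correctly follows 'take_000009.dng'.
    """
    dngs = [f for f in os.listdir(folder) if f.lower().endswith(".dng")]
    return sorted(dngs, key=lambda f: int(re.search(r"(\d+)", f).group(1)))

# Demo with empty stand-in files, created deliberately out of order:
with tempfile.TemporaryDirectory() as d:
    for i in (3, 1, 2):
        open(os.path.join(d, f"take_{i:06d}.dng"), "w").close()
    ordered = frames_in_take(d)
    print(ordered)  # frames come back in shooting order
    # Retrieving "only the required frames" is then just a slice:
    print(ordered[1:3])
```

A container format does the same job with an index table inside one file; either way, the essential structure is an ordered sequence of independently-decodable frames.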
It's a computer at heart
And yes, it’s a computer. The current Octopus camera is pretty clearly a combination of an Intel NUC7i7 motherboard running some variant of Linux and a PCIe-interfaced sensor module from Ximea, featuring either the AMS CMV20000 or Sony IMX253 sensor for the full-frame and 4/3 options respectively. SATA-III and USB are used for storage, implying that a wide variety of flash formats might be usable. The NUC series are motherboards sized similarly (but not otherwise similar) to the classic PC/104 and derivative hardware intended for embedded applications very much like this; this one packs a sturdy Core i7-8650U CPU. At a hair over four inches cubed and under two pounds in weight, the camera is a compact item.
This design approach uncomfortably recalls the rather dubious Cinemartin Fran, which was built along similar lines. Let’s be clear that lots and lots of things, from well-known cameras to bits of rack-mounted post gear, are built as embedded computer systems. Perhaps crucially, it means that the current hardware may lack things like SDI outputs, conventional genlocking, timecode, and other staples of production tech to date. Whether those things are essential is somewhat a matter of use case. Timecode is only a matter of analogue audio I/O, though jam sync is harder to do in software.
What this implies is that the capability and usability of the thing end up being very much functions of the software, of which we’ve so far seen nothing. Perhaps the most important caveat of currently-released material is that it all uses the monochrome sensor options. This allows a manufacturer to offload the heavy lifting of a final demosaic to other software, an approach that’s been making cameras simpler, smaller, lighter and less power-hungry ever since the SI2K.
There are, though, two reasons that any camera of this sort needs to do an onboard debayer: first, there’ll need to be something for monitoring. Second, some of the literature mentions at least the idea of using a very popular and capable open-source library for encoding the video, which would make a very wide variety of codecs available. Recording anything other than raw would, of course, require a final-quality onboard demosaic. Whether the NUC hardware is up to doing this, especially at 5K or higher resolution, in realtime, remains to be seen.
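The monitoring path, at least, doesn't need a final-quality demosaic. The cheapest approach, and one plausible sketch of what an on-camera preview might do (this is an illustration of the general technique, not Octopus's actual pipeline), is a half-resolution debayer that collapses each 2x2 Bayer cell to one RGB pixel, avoiding interpolation entirely:

```python
import numpy as np

def quick_debayer_rggb(raw):
    """Half-resolution debayer of an RGGB Bayer mosaic.

    Each 2x2 cell (R G / G B) becomes a single RGB pixel: the red and
    blue samples are taken directly, and the two green samples are
    averaged. Cheap enough for a realtime monitoring path, but it
    halves resolution, which is why a final-quality demosaic
    interpolates full-resolution colour instead.

    `raw` is a 2D array with even dimensions.
    """
    r = raw[0::2, 0::2].astype(np.float32)          # red photosites
    g = (raw[0::2, 1::2].astype(np.float32)
         + raw[1::2, 0::2].astype(np.float32)) / 2  # average both greens
    b = raw[1::2, 1::2].astype(np.float32)          # blue photosites
    return np.dstack([r, g, b])

# A flat grey 4x4 mosaic collapses to a flat grey 2x2 RGB image.
mosaic = np.full((4, 4), 128, dtype=np.uint16)
rgb = quick_debayer_rggb(mosaic)
print(rgb.shape)  # (2, 2, 3)
```

A final-quality demosaic replaces the decimation above with edge-aware interpolation at full resolution, which is exactly the work that becomes computationally interesting at 5K in realtime.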
Showing monochrome footage, of course, doesn’t demonstrate any sort of demosaic or other colour processing choices, and that’s quite a big chunk of the required functionality that we haven’t yet seen. Neither have we seen anything of the user interface; what we have seen is silent monochrome footage which really only requires taking the frame data from the sensor module and wrapping it in a DNG file. Much more may already have been done, but we haven’t seen it.
The stated release date is effectively this time next year. Assuming the colour versions of those sensor modules are available to the company, what lies between now and then is a lot of software development. Certainly, the use of commodity hardware should make improvements easier, and the use of an open-source encoding library might mean a wide range of codecs quickly becomes available. The success of the camera will depend largely on the quality of what comes out of that process.
The Octopus camera is, then, something filled with promise that we can only hope will be fulfilled.