
Understanding camera to cloud workflows

Pic: Frame.io

Camera to cloud workflows are now accessible to almost every filmmaker. But how do they work, and what are the benefits they bring? David Shapton explains.

Do you remember something that seemed impossible, and then time passed, and suddenly, you realised it was possible? Maybe it was getting your first car, your first house, or that job you’ve always wanted. Then, almost without noticing, you found that you could afford it, or that your small, incremental promotions meant you were a shoo-in for that perfect job.

That’s what’s just happened to camera to cloud. Ten years ago, it would have seemed like a dream because, at its most basic, it’s about shooting a film with no storage in the camera. But now, we’re starting to see something like this happen. Cameras still need onboard storage for the original cinema quality file, but it’s now possible to have a direct-to-cloud workflow with smaller proxy files, which, after editing, can be matched to the original camera files. This capability has been creeping up on us for a long time. It’s just that most of us didn’t realise how close it had become.

You still can’t send your uncompressed 8K video files down a legacy telephone line. But you can send high-quality and usable proxy files directly from the set as soon as you’ve finished shooting a scene. And sometimes, while you’re still shooting.

Proxy is a word thrown around a lot, and a proxy file’s characteristics will depend on your exact circumstances. If you’re old enough to remember the early days of NLEs, the computers of the time weren’t powerful enough to edit full-resolution video files - called “online” files. Instead, they worked “offline” on what we would now call proxy files. A proxy is something that substitutes for something else while remaining functionally equivalent in some significant way. With proxy files, the equivalence is timecode - the same countable number of frames. Proxy files are much smaller and easier to process, at the cost of the originals’ quality. But they’re good enough to ensure accurate lip sync, composition and the overall flow of an edit.

Today, NLEs can handle full-resolution video files with aplomb, but you still need proxies for cloud workflows - most of the time.
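The “same countable number of frames” equivalence can be sketched in a few lines: frame N of the proxy corresponds to exactly the same moment - and the same timecode - as frame N of the master. This is a minimal, non-drop-frame illustration; real proxies carry timecode in their file metadata, and drop-frame rates complicate the arithmetic.

```python
# Why editing on a proxy works: a cut decided at frame N of the proxy
# resolves to the identical timecode in the full-resolution master.

def frame_to_timecode(frame: int, fps: int = 24) -> str:
    """Convert a zero-based frame number to HH:MM:SS:FF (non-drop-frame)."""
    ff = frame % fps
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# An edit decision made on the proxy at frame 1234...
cut_point = frame_to_timecode(1234)
# ...is the same timecode address in the camera original.
print(cut_point)  # 00:00:51:10
```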

Making it work


Kit like Atomos Connect is going to be a key part of getting data out of many cameras in current c2c workflows

There are a few prerequisites. First, and obviously, you must be able to capture the video. Most cameras don’t have built-in proxy file creation or internet connectivity, so you need additional hardware. Specifically, you need to capture either the camera’s uncompressed output or its RAW footage, in both cases probably over HDMI or SDI. You can capture directly into a computer with suitable I/O, or choose something portable like Atomos’ Ninja and Shogun monitor recorders to capture the original footage and generate a proxy file. Atomos Connect lets you connect to fast WiFi and participate in all kinds of cloud workflows, including camera-to-cloud. One huge advantage of this approach is that you can use virtually any type of camera in a camera-to-cloud workflow. The setup handles audio, too, so there’s no danger of losing audio sync while editing.

Depending on your chosen system, you can either upload your proxy files to the cloud once they’re captured or have them upload while you’re shooting. This latter method means remote editors can start work within seconds of the action.
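The first of those two strategies - queue each clip as it finishes and upload it in the background - can be sketched as a simple producer-consumer loop. The upload call here is a stand-in, not a real cloud API; actual systems use the service’s own SDK.

```python
# Sketch of background uploading: takes are queued as they are captured,
# while a worker thread drains the queue. The "upload" is a stand-in
# (appending to a list) for a real cloud-service SDK call.
import queue
import threading

clips: queue.Queue = queue.Queue()
uploaded: list[str] = []

def uploader() -> None:
    while True:
        clip = clips.get()
        if clip is None:          # sentinel: shooting has wrapped
            break
        uploaded.append(clip)     # stand-in for the real upload call

worker = threading.Thread(target=uploader)
worker.start()
for take in ["A001_T01.mp4", "A001_T02.mp4"]:  # as each take is captured...
    clips.put(take)                            # ...hand it to the uploader
clips.put(None)
worker.join()
print(uploaded)  # ['A001_T01.mp4', 'A001_T02.mp4']
```

The point of the queue is resilience: if connectivity drops, takes simply accumulate until the link returns, which is part of why these workflows are more reliable than they might sound.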

To some, this setup might still sound complicated. It may also sound unreliable. We’ve all grown up during the relatively short history of the internet, and glitches, dropouts and complete connectivity failures have formed indelible memories in all of us. But camera to cloud has been engineered around this and is, in fact, extremely reliable.

Example workflow

Adobe’s Frame.io has grown into a fully-fledged camera-to-cloud ecosystem. It was designed with the internet in mind. It has extraordinary levels of data security built in and is designed to handle the logistics and vagaries of internet workflows. It’s actually one of the safest ways to handle valuable media, especially when you use it in conjunction with an ecosystem like Atomos’. Here’s why:

Let’s assume you have a camera with (say) an Atomos Ninja Ultra, an Atomos Connect, and a Frame.io account. Let’s also assume you have everything set up for a camera-to-cloud workflow.

Three things start to happen as soon as you start capturing a scene.

  • Your camera acts like it usually does and creates and stores a RAW version of the video file. This is your “master” file. It’s the version that has the highest quality recording of your scene. This is stored exactly where it has always been - on the camera’s internal storage.
  • Next, your Atomos Ninja or Shogun captures a version of the scene and stores it as a production-quality file. This could be Apple ProRes, a camera-native RAW file, an Apple ProRes RAW file, or Avid DNxHD - depending on your chosen options.
  • At the same time, your Ninja or Shogun generates a lower-bitrate version of the file, which it will store for later upload to the cloud or will upload in the background as shooting takes place; this will depend on your setup.

So you have three copies of the original scene, two at production quality. The third is the proxy.
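A rough storage calculation shows why only the third copy goes up the wire. The bitrates below are illustrative assumptions for the sake of the arithmetic, not vendor specifications.

```python
# Rough storage footprint of one take in a camera-to-cloud setup.
# Bitrates are order-of-magnitude assumptions, not vendor figures.

BITRATES_MBPS = {
    "camera RAW (in-camera master)": 2000,   # e.g. high-resolution RAW
    "recorder master (e.g. ProRes)": 700,    # production-quality mezzanine
    "cloud proxy (H.264/H.265)": 10,         # small enough to send from set
}

def take_sizes_gb(duration_s: float) -> dict[str, float]:
    """Size in gigabytes of each copy of a take (1 GB = 8000 Mb)."""
    return {name: round(mbps * duration_s / 8000, 3)
            for name, mbps in BITRATES_MBPS.items()}

for name, gb in take_sizes_gb(60).items():
    print(f"{name}: {gb} GB")
```

For a one-minute take, the two production-quality copies run to gigabytes while the proxy is well under a tenth of a gigabyte - which is the whole reason the proxy, and only the proxy, travels to the cloud during the shoot.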

  • Once uploaded, the file lands in Frame.io’s ecosystem, where it is managed and fed into Adobe Premiere for editing. This could be within minutes or even seconds of the shoot starting.
  • Once editing is finished, you can relink the edit to the original camera files or the production-quality recordings.
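Relinking works because the proxy and the master share identifying metadata. A simplified sketch of the matching step - real NLEs also compare reel names, frame rates and durations, and the field names here are hypothetical:

```python
# Minimal conform step: match each proxy used in the edit back to the
# full-quality master with the same clip name and start timecode.
# Simplified; real NLEs also check reel name, frame rate and duration.

def relink(edit_clips: list[dict], masters: list[dict]) -> dict[str, str]:
    """Map proxy file name -> master file name by (clip, start timecode)."""
    index = {(m["clip"], m["start_tc"]): m["file"] for m in masters}
    return {c["file"]: index[(c["clip"], c["start_tc"])]
            for c in edit_clips if (c["clip"], c["start_tc"]) in index}

proxies = [{"file": "A001_proxy.mp4", "clip": "A001", "start_tc": "01:00:10:00"}]
masters = [{"file": "A001_master.mov", "clip": "A001", "start_tc": "01:00:10:00"}]
print(relink(proxies, masters))  # {'A001_proxy.mp4': 'A001_master.mov'}
```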

Cloud workflows, in general, and camera-to-cloud in particular, are becoming an everyday reality for filmmakers. Now that the technology and services exist for practical, affordable end-to-end production, they are likely to see rapid adoption. But this is only the start.

The future evolution of camera to cloud


Pic: Shutterstock

Bandwidth and compression technology will continue to improve. At some point, in the not-too-distant future, there will be no need even for proxy files. In a way, we have already arrived at that point, but from a different angle. Some devices can now produce an H.265 proxy file at a higher bitrate - still massively smaller than ProRes and the like - that is good enough to use for delivery. You probably wouldn’t master a cinema film from these new, more detailed proxies, but you certainly can post them directly to social media or even to news broadcasts. To viewers on smartphones or a typical domestic TV, they would look little different from what they’re used to seeing.

This puts camera to cloud into a different league. It means you don’t have to re-link your original camera files (although you still can later) and can publish your edits immediately after an event or even during it, depending on upload speeds and whether the production end of the workflow supports simultaneous uploading.
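Whether publishing can happen during the event comes down to simple arithmetic: the upload keeps pace with shooting only while the uplink exceeds the proxy bitrate. The figures below are assumptions chosen to illustrate the calculation.

```python
# Can the proxy upload keep pace with shooting? It runs in real time
# only if the uplink bandwidth exceeds the proxy bitrate.

def upload_lag_s(clip_s: float, proxy_mbps: float, uplink_mbps: float) -> float:
    """Seconds the upload finishes after the clip ends (0 = real time)."""
    upload_time = clip_s * proxy_mbps / uplink_mbps
    return max(0.0, upload_time - clip_s)

# A 25 Mbps delivery-grade H.265 proxy over a 50 Mbps uplink:
print(upload_lag_s(120, 25, 50))  # 0.0  - publishable as the event runs
# The same proxy over a 20 Mbps uplink trails the action:
print(upload_lag_s(120, 25, 20))  # 30.0 - finishes half a minute late
```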

Over the last couple of years, cloud production workflows have progressed from being a proof of concept, through being taken up by early adopters, to a robust and stable way to speed up video production that’s affordable and accessible to anyone, anywhere in the world.

Tags: Cloud