
Would you want to control your camera from an iPad?


The Mackie DL1608

You wouldn’t expect the video business to take lessons in innovation from a company that makes low-cost mixing consoles. But, in the case of the Mackie DL1608, it probably should.

It’s a mixing desk where the entire control surface has been replaced by an iPad. It’s what you’d see on the recreation deck of the Starship Enterprise if they were having a live gig. So what’s the iPad actually doing in this arrangement? It certainly doesn’t have 16 channels of audio I/O. It simply controls all the functions of the hardware mixer. Everything you’d expect to find in a conventional desk is present in the Mackie hardware: mic amps, effects, dynamics (compression) and all the A/D and D/A that you need to go to and from the analogue audio world.

All of which might sound a bit mundane. In fact, you could be forgiven for thinking that this is nothing but a cost-saving measure. Faders, knobs and potentiometers are expensive and difficult to manufacture: get rid of them and you’ve probably halved your production costs.

But that’s not the point - although it is significant.

A better interface

What really matters is that by using an iPad as the main interface to the mixer, the device as a whole becomes almost infinitely more flexible.

The iPad is not fixed permanently to the mixer. If you remove it, it communicates wirelessly with the “base unit”. Immediately you can see the advantages. You can take the controls of the mixer to the back of the room where you can hear the sound properly - from the same audio “viewpoint” as the audience.

But, even better than that, you can connect multiple iPads: up to ten of them. You can assign each iPad a different set of control functions, so, one person can control the input levels, another the effects, and each musician can adjust the feed into their own monitor speakers.

There are nice “appy” touches, like being able to put band members’ photos beside each fader, and of course “scenes” can be preset ad infinitum.

So, all very good; but what does any of this have to do with video? Quite a lot, actually.

Building blocks


It’s all about topology: the basic building blocks in a piece of equipment, and how they’re connected. It’s really important to distinguish your content (which is normally a high-bandwidth signal) from your controls (or instructions).

MIDI is an easy example, again from the world of Audio (well, music, actually). MIDI was designed to connect electronic keyboards to sound sources. The building blocks are a keyboard, and a box of electronics that makes the sounds. Before MIDI, you’d have some kind of proprietary connection between the two. You could no more take a keyboard from one synthesiser and connect it to the sound-generating circuits of another than you could sew a cat’s head onto a duck. Either process would be messy and wouldn’t work at all well. 

The designers of MIDI figured out the essence of this communication channel. The information you need is the note (or notes) that have been pressed, how fast they were hit, and, if the keyboard supports it, how hard each key is being pressed after it was hit. There are also sub-channels for other metadata like modulation (e.g. vibrato), pitch-bend, and so on. Finally, there are “System Exclusives” - spaces in the data stream to send proprietary information between the devices in the conversation (you might have some controls on the keyboard that can adjust the parameters of the sound generator, for example).

That’s all there is to it. It’s pretty simple - and incredibly effective.
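To see just how little data is involved, here’s a minimal sketch (in Python, purely for illustration) of a MIDI “note on” event. The three-byte structure - a status byte identifying the message type and channel, then the note number and the velocity - comes from the MIDI 1.0 specification; the helper function itself is just an example.

```python
# A MIDI "Note On" event is just three bytes: status, note number, velocity.
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI Note On message.

    channel:  0-15  (shown as channels 1-16 on most gear)
    note:     0-127 (60 is middle C)
    velocity: 0-127 (how fast the key was struck)
    """
    status = 0x90 | (channel & 0x0F)   # 0x9n = Note On, where n is the channel
    return bytes([status, note & 0x7F, velocity & 0x7F])

# Pressing middle C fairly hard on channel 1:
print(note_on(0, 60, 100).hex())  # -> "903c64": the whole performance event in three bytes
```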

You can take pretty much any MIDI keyboard and plug it into any MIDI sound device (or another keyboard) and - a quite stunning innovation, actually - it will just work. Since it was introduced in the mid-80s, MIDI has spawned a whole new industry of keyboard controllers, sound modules and computer software to organise and record MIDI sessions.

Remember, with MIDI, you’re not transporting audio: all you’re doing is sending metadata about the performance. The bitrate of MIDI is very low. The bitrate of CD-quality stereo audio is around 1.4 Mbit/s; MIDI’s data rate is 31.25 kbit/s: tiny in comparison.
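The arithmetic behind those figures is easy to check - a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the two data rates quoted above.
cd_bitrate = 44_100 * 16 * 2   # samples/s x bits per sample x channels = 1,411,200 bit/s
midi_bitrate = 31_250          # the fixed serial rate of MIDI 1.0

print(f"CD audio: {cd_bitrate / 1e6:.2f} Mbit/s")    # ~1.41 Mbit/s
print(f"MIDI:     {midi_bitrate / 1e3:.2f} kbit/s")  # 31.25 kbit/s
print(f"Ratio:    about {cd_bitrate // midi_bitrate}:1 in favour of the control data")
```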

The moral of this is that control data is much easier to move around than the media itself. That’s why, in the Mackie mixer, all the audio processing is done in dedicated Mackie hardware, and the iPads only handle control information. It’s the best of both worlds.

So, what does any of this have to do with cameras?

A new approach to cameras


Again, look at the topology. A camera consists of:

  • a lens
  • a sensor
  • a bunch of electronics (which nobody except the manufacturer knows much about; typically all we know is that it takes the raw information from the sensor and turns it into recognisable video)
  • various input and output sockets
  • a viewfinder
  • the controls
  • a recording system

It makes sense for most of these elements to be together. You could separate the electronics from the lens and sensor, perhaps to get a smaller distance between two lenses in a 3D rig, and external recorders (like those from Sound Devices, Atomos, Blackmagic Design etc) are already “pulling the camera apart”, in a sense.

Look at the average professional camera, and the controls are all over the place. Sony’s popular FS100 is quite literally a box covered in controls. Replacing physical controls with menu options on a tiny screen can make matters even worse: the more complicated cameras get, the harder it is to find things in the menus. How you set up and control a camera is a massive area for improvement.

And this is where the iPad (or any kind of tablet, for that matter) comes in. 

If you were to replace the controls on a camera with a remote, touch-screen device, there would be a multitude of advantages.


 

  • Easy to use (and access) controls
  • A much better user interface
  • Better visualisation of parameters
  • Completely updatable
  • Easy to enter metadata

And if new high-capacity wireless networks like WiGig take off, we can add:

  • Use a tablet as a high-quality viewfinder
  • Close interaction between the viewfinder and the controls
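
There’s no standard protocol for any of this today, so the sketch below is purely hypothetical: every field name, address and port is invented for illustration. The point it makes is the same one MIDI makes - a control message is a handful of bytes, while the video it governs runs to hundreds of megabits per second.

```python
# Purely hypothetical: a tablet app sending a tiny control message to a camera
# over the local network. None of these field names come from a real camera API.
import json
import socket

CAMERA_ADDRESS = ("192.168.1.50", 9000)   # made-up address and port

command = {
    "iso": 800,
    "shutter_angle": 180,
    "white_balance_k": 5600,
    "record": True,
}

payload = json.dumps(command).encode("utf-8")
print(f"Control message: {len(payload)} bytes")   # well under 100 bytes

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, CAMERA_ADDRESS)
```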

What do you think? The new iPad has a screen with about a million more pixels than a "mere" HD monitor. Would you like to see complex and unintuitive camera controls ported to the iPad? Would you trust a wireless link for controlling your camera? 

We think it’s a direction worth exploring.

Tags: Technology
