
MIT’s camera captures the speed of light

2 minute read

Bearer of light. Image: MIT

MIT has developed a camera with an effective exposure time of two trillionths of a second per frame - a capability that could have some interesting implications for the rendering process.

For cinematography pioneer Louis Lumière, even 24 frames per second would have been a luxury. Now we have 4K cameras capable of capturing images at 1000fps. But even that pales in comparison with technology developed at the Massachusetts Institute of Technology (long a pioneer of high-speed photography), which records footage at such high speeds that even the movement of light is captured in intricate detail.

To achieve this, the super-fast camera photographs scenes at a trillion frames per second. At this speed even a nanosecond of action can be slowed down to 20 seconds of playback, and so the movement and behaviours of light particles - which travel at approximately 186,000 miles per second, or 300,000 kilometres per second - can finally be observed in incredible detail.
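To put those numbers in perspective, here is a quick back-of-the-envelope check in Python. The speed of light is a standard physical constant; only the trillion-frames-per-second rate and the nanosecond-to-20-seconds stretch come from the claims above.

```python
# Back-of-the-envelope check of the figures quoted above.
C = 299_792_458        # speed of light in metres per second
FPS = 1e12             # capture rate: one trillion frames per second

frame_interval = 1 / FPS                 # one picosecond between frames
light_per_frame = C * frame_interval     # how far light moves per frame

capture_window = 1e-9  # one nanosecond of real time...
playback_time = 20.0   # ...stretched to 20 seconds of playback

print(f"Light travels {light_per_frame * 1000:.2f} mm between frames")
print(f"Slow-motion factor: {playback_time / capture_window:,.0f}x")
```

In other words, each frame sees light advance by less than a third of a millimetre - which is why the footage can resolve a pulse sweeping across a scene.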

The system utilises a process known as femto-photography, which - rather than capturing a whole scene in a single pass - works by firing a pulsed laser beam into the scene and then capturing it up to 500 times, recording a single ‘slice’ each time via a rotating mirror. By stitching together all these successively recorded slices, the camera then creates what appears to be a recording of a single event.
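To make the stitching step concrete, here is a minimal sketch - our illustration, not MIT's actual pipeline - assuming each capture yields one scanline ‘slice’ recorded over time, as a streak camera would produce:

```python
import numpy as np

def stitch_slices(slices):
    """Combine per-scanline slices into a time-ordered stack of 2D frames.

    slices: list of arrays, one per scanline, each shaped (width, n_frames)
            and holding intensity at each horizontal position over time.
    Returns an array shaped (n_frames, n_scanlines, width), i.e. a movie.
    """
    # Stack along a new scanline axis: (n_scanlines, width, n_frames)
    volume = np.stack(slices, axis=0)
    # Reorder so time comes first: each volume[t] is one full 2D frame
    return np.transpose(volume, (2, 0, 1))

# Example: 500 scanline slices, 64 pixels wide, 100 time samples each
movie = stitch_slices([np.random.rand(64, 100) for _ in range(500)])
print(movie.shape)  # (100, 500, 64) -> 100 frames of 500x64 pixels
```

The key point is that no single exposure contains a whole frame; the 2D movie only exists once the slices are reassembled.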

MIT has already started developing the system as a data capture tool, using software to process light scattering in an area and then rebuild the scene as a 3D model. Here the way photons bounce off surfaces effectively makes it possible to ‘see’ around corners - something that may prove significant for fields such as sensor systems.
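The underlying idea is simple time-of-flight geometry: the later a photon returns, the further it has travelled. A minimal sketch of the direct-bounce case (ours, not MIT's reconstruction algorithm) might look like this:

```python
C = 299_792_458  # speed of light in metres per second

def direct_bounce_distance(return_time_s):
    """Distance to a surface from a single out-and-back photon path:
    the photon travels c * t in total, so the surface is half that away."""
    return C * return_time_s / 2

# A photon returning after 2 nanoseconds implies a surface ~0.3 m away
print(f"{direct_bounce_distance(2e-9):.3f} m")  # ~0.300 m
```

Seeing around corners extends the same principle to multi-bounce paths: each return time constrains the hidden surface to lie on an ellipsoid, and intersecting many such constraints recovers the geometry.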

While the nature of the system means it’s not likely to filter through to standard camera technology in any meaningful way, the technology might ultimately prove useful in filmmaking via a role in visual effects. Currently, digital effects crews utilise a mix of HDRI stills, chrome/grey balls, and sometimes LIDAR to record on-set data for later use in post-production. A camera capable of capturing the exact behaviour of light on-set could prove extremely valuable when rendering and lighting digital elements for integration back into live plates.

It’s also possible that the developers of renderers may find value in a tool that depicts light transport so comprehensively. Raytracing and other types of renderers all work to simulate or approximate the complex behaviour of light as it moves, scatters, and reflects many times in 3D space. All do so by taking mathematical shortcuts of one type or another, in order to find the best possible compromise between image quality and render speed. It’s not too hard to see how the ability to compare against real-world lighting recorded at a trillion frames per second could aid the refinement of such tools.
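As a toy illustration of the kind of shortcut involved - our example, not any production renderer's code - consider estimating how much light from a uniform sky reaches a surface by averaging a handful of random samples rather than tracing every photon:

```python
import math, random

def estimate_sky_light(surface_normal, n_samples):
    """Monte Carlo estimate of the mean cosine-weighted sky visibility
    over the hemisphere above a surface. For an unoccluded surface under
    a uniform sky, the exact answer is 0.5."""
    total = 0.0
    for _ in range(n_samples):
        # Pick a uniformly random direction on the unit sphere
        z = random.uniform(-1, 1)
        phi = random.uniform(0, 2 * math.pi)
        r = math.sqrt(1 - z * z)
        d = (r * math.cos(phi), r * math.sin(phi), z)
        # Lambert's cosine law: only directions above the surface count
        cos_theta = sum(a * b for a, b in zip(d, surface_normal))
        if cos_theta > 0:
            total += cos_theta
    # Halve the sphere to a hemisphere, then average over samples
    return total * 2 / n_samples

up = (0.0, 0.0, 1.0)
for n in (16, 256, 4096):
    print(n, round(estimate_sky_light(up, n), 3))  # converges toward 0.5
```

Fewer samples render faster but produce noisier estimates; real-world reference footage of light transport would give renderer developers exact answers to converge toward.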

Tags: Technology
