
LIDAR: The world seen through depth


LIDAR depth camera. Image: RedShark News

Depth-sensing capabilities are coming to the forefront of some camera designs, but LIDAR takes things a step further, capturing images from depth and depth alone.

We sense depth somewhat indirectly, by interpreting the differences between the images produced by our two horizontally offset eyes. It's not an exact science and, as evidenced by the minor craze for auto-stereoscopic images a few years ago, the system can be fooled quite easily.
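A minimal sketch of that binocular principle, for the curious: distance falls out of the disparity between the two views as Z = f × B / d, where f is the focal length, B the separation between the eyes (or cameras) and d the measured disparity. The function name and figures below are illustrative assumptions, not values from the article.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Estimate distance (metres) to a point seen by two horizontally offset cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive; zero disparity means the point is at infinity")
    return focal_px * baseline_m / disparity_px

# A one-pixel error in the measured disparity shifts the estimate noticeably,
# which is part of why stereo depth is so easy to fool:
print(depth_from_disparity(focal_px=800.0, baseline_m=0.065, disparity_px=10.0))  # ~5.2 m
print(depth_from_disparity(focal_px=800.0, baseline_m=0.065, disparity_px=9.0))   # ~5.8 m
```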

Absolute accuracy doesn't matter much if you're staring at a painting, but it could be a life-or-death matter if that depth information is influencing the angle of your steering wheel.

LIDAR, which is essentially a laser-based depth-sensing system, produces images based solely on depth. Since it works by measuring the time it takes for light to travel to and from a distant object, its readings can be remarkably exact and unambiguous.
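The time-of-flight arithmetic behind that is simple: distance = (speed of light × round-trip time) / 2. Here's a minimal sketch; the function name and timing figure are illustrative, not taken from any particular sensor.

```python
C = 299_792_458.0  # speed of light in m/s

def distance_from_round_trip(round_trip_s: float) -> float:
    """Distance (metres) to a target, given a laser pulse's round-trip time in seconds."""
    # Halve the round trip: the pulse travels out to the target and back.
    return C * round_trip_s / 2.0

# A pulse returning after 200 nanoseconds puts the target roughly 30 m away:
print(distance_from_round_trip(200e-9))  # ~29.98 m
```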

This video, produced by The New York Times Magazine and spotted via Engadget, is striking mainly because virtually all the visual information comes from remarkably accurate depth information. With normal images, luminance and colour are the primary factors, with depth a distant (I'm sure you saw what I did there) second.

Ultimately, this mirrors the way our brain analyses images: neuroscientists have found that as many as twelve (possibly more) separate mechanisms are invoked to deconstruct the imprecise and ambiguous visual data that our eyes present us with.

The images are striking.

The original article is here.
