
Driving autonomous vehicles: what Light did next


Long-standing readers will remember the Light camera company, whose L16 camera promised much but ultimately failed to deliver. Now the company is back, and it's driving cars…

The Light L16 was one of the most hyped camera releases of all time. And with good reason: a cellphone-sized camera using 16 separate lenses and 16x 13MP sensors is the sort of gadget that would make headlines now, never mind in 2017 when it was released. However, the performance of the $2000 unit never really lived up to the hype, and though the company tried to break into the smartphone market with the Nokia 9 PureView, that too failed to ignite the market, and Light sensibly decided to exit the smartphone industry.

Some of the technology the company developed was too good to waste, however, especially its depth perception capabilities. When the Nokia 9 PureView was released in 2019, its most capable competitor at the time was the Pixel 3, which could compute two layers of depth from its single camera. The Nokia's five cameras allowed it to calculate a staggering 1200 layers of depth, and this was very interesting to the automobile industry.

Autonomous vehicles are the current holy grail of the automotive industry, with vast resources being pumped in to get them safely up and running. There are, of course, all sorts of issues surrounding self-driving cars, with stories such as Tesla's rollback of its latest Full Self-Driving software not exactly raising confidence. In truth, the hype around such vehicles doesn't measure up to the reality. Tesla admitted earlier this year that its systems are only at Level 2 on the autonomous vehicle scale published by the Society of Automotive Engineers: essentially a semi-automated driving system which requires supervision by a human driver.

Moving beyond lidar

We are, of course, still in the very early days of autonomous vehicles; not so much at the on-ramp as turning out of our driveway and onto the suburban street. But one thing the vast majority of vehicles that have made it off the drawing board and into even prototype production have in common is that they use lidar.

Lidar works in basically the same way as radar but uses a laser as the source. The laser targets an object, and the time it takes for the reflection to bounce back to a sensor is measured, building up a 3D point-cloud picture of the world in front of it. It has a huge heritage in autonomous vehicles: it was the technology that helped guide Stanley, Stanford Racing's successful entry in the 2005 DARPA Grand Challenge and the shining light of that first generation of robotic vehicles. Stanley completed a 200+km course in just under 7 hours, navigating three narrow tunnels, over 100 sharp left and right turns, and numerous challenges along the way.
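In case the principle is unfamiliar, here is a minimal sketch of the time-of-flight arithmetic lidar relies on (an illustration of the physics only, not any vendor's actual code):

```python
# Lidar time-of-flight: the round-trip time of a laser pulse,
# multiplied by the speed of light and halved, gives the distance
# to the reflecting object.
C = 299_792_458  # speed of light, m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to the target given the measured round-trip time of a pulse."""
    return C * t_seconds / 2  # halved because the pulse travels out and back

# A reflection arriving ~1.67 microseconds after firing puts the object
# roughly 250 metres away -- about the practical range limit noted below.
print(f"{distance_from_round_trip(1.67e-6):.1f} m")  # ~250.3 m
```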

But lidar has its limitations, notably range, which even in good weather conditions rarely gets above 250 metres. Given that the stopping distances of small, fast-moving vehicles and large, slower-moving ones are often 250 metres and above, that limits its deployment significantly. So-called perception discrepancies were responsible for 38% of Google's autonomous disengagements (i.e. a swift return to manual mode) in 2015, and though that figure, across California at least, was down to around 9% by 2019, that's still fairly slow progress when the consequences can be fatal.
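To see why 250 metres is marginal, a back-of-envelope stopping distance is reaction distance plus braking distance, d = v·t_r + v²/2a. The parameter values below are illustrative assumptions, not figures from Light or Google:

```python
# Stopping distance = reaction distance + braking distance:
#   d = v * t_r + v**2 / (2 * a)
def stopping_distance(speed_kmh: float, reaction_s: float, decel_ms2: float) -> float:
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * reaction_s + v ** 2 / (2 * decel_ms2)

# A fast car braking hard vs a heavy truck braking gently (assumed values):
print(f"car at 160 km/h:   {stopping_distance(160, 1.0, 7.0):.0f} m")  # ~186 m
print(f"truck at 120 km/h: {stopping_distance(120, 1.5, 2.5):.0f} m")  # ~272 m
```

Both figures are uncomfortably close to, or beyond, the sensor's best-case range.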

Which is where Light comes in. At the tail end of 2020, Light introduced a new system targeted at the auto industry based on its depth perception technology, called Clarity. It reckons that this camera-based platform can see any 3D structure in the road from a distance of 10cm to 1000 metres and at 20x the detail of the best-in-class lidar systems. And, as the company says, for every 100 metres of added perception, a vehicle gains an additional four seconds of time to slow down, change lanes, or alert the driver to take over.
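That 100-metres-per-four-seconds figure checks out as a round number for highway speeds, as a quick sanity check shows:

```python
# At what speed does 100 m of extra sight distance buy 4 s of extra time?
extra_range_m = 100
extra_time_s = 4
speed_ms = extra_range_m / extra_time_s  # 25 m/s
print(speed_ms * 3.6)                    # 90 km/h -- typical highway speed
```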

The pic below shows the difference between a lidar output on the left vs a Clarity one on the right.

[Image: lidar depth output (left) vs Light Clarity pixel-depth output (right)]

The cameras — off-the-shelf, auto-grade units — run at 30Hz and, for each frame, combine the images from two to four units to determine the depth of all the objects in the field of view, calculating about one million points per frame. Two cameras are the minimum; four give the full 10cm to 1000m range in a configuration that runs two at a longer focal length and two at a shorter one (thankfully no-one is talking about 16 of them). This also provides some redundancy in case equipment fails or a lens gets covered with bits of road murk in operation.
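Multi-camera depth ultimately comes down to triangulation, and a sketch of the textbook maths (not Light's proprietary algorithm) shows why that longer focal length pair matters for long range. The focal length and baseline figures below are assumed purely for illustration:

```python
# Stereo depth from disparity for a rectified camera pair:
#   Z = f * B / d
# where f is the focal length in pixels, B is the baseline between the
# cameras in metres, and d is the pixel disparity of a matched feature.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        return float("inf")  # zero disparity: the point is effectively at infinity
    return focal_px * baseline_m / disparity_px

# Assumed illustrative numbers: 1400 px focal length, 30 cm baseline.
for d_px in (420.0, 4.2, 0.42):
    print(f"disparity {d_px:6.2f} px -> {depth_from_disparity(1400, 0.3, d_px):7.1f} m")
# ~1 m, ~100 m and ~1000 m: ranging out to a kilometre demands sub-pixel
# disparity matching, which is where longer focal lengths earn their keep.
```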

Levelling up

Talking of murk, there is a lot of regulatory murk surrounding autonomous vehicles too, which is stalling their deployment on public roads. So, while the Audi A8 justifiably claimed to be the first Level 3 self-driving car (one that can make informed decisions for itself, such as accelerating past a slow-moving vehicle, though human override is still required) when its Traffic Jam Pilot feature debuted in 2017, the much-publicised feature was never actually rolled out and was canned in 2020.

Light is hoping that its Clarity system will help unblock the regulatory jam and make vehicles that much safer; safe enough that we can skip Level 3 altogether and shoot straight for Level 4 (high driving automation), exemplified by test vehicles such as Alphabet’s Waymo robotic taxi service in Arizona.

Thus, as Ars Technica reports, the company is currently talking to some eight industry partners and hopes to have 11 trials underway by the end of the year, including with Level 4 semi trucks. Vehicles could be on the road in three to four years if the regulatory environment approves them; too late for this supply chain shortage, but maybe in time to help alleviate the next one. Which is not a bad achievement for a company that tried to make a camera once upon a time…
