It's easy to announce a new discovery in a blaze of publicity. It's harder to turn that discovery into a product. Things that turn out to be a bit less practical, a bit less useful, or just a bit harder to productise are often forgotten about. Let's look at a few things that have come up over the past few years which are still, for better or worse, languishing in the R&D lab.
More capacious batteries
Batteries. Improvements in the fundamental physics are regularly promoted, but rarely released
If there's any line of research offering a bigger pot of gold than battery technology, please let us all know about it in the comments. Batteries are used everywhere. Cell phones are an apparently endless market. Electric cars are a rare example of a literally world-changing technology, and a lot of approaches to renewable energy are likely to require individual homes to have some reasonable storage capacity on hand. Current battery chemistry is far from ideal, with limited cycle life and at least some risk of fire. Worse, there may not be enough readily extractable lithium on Planet Earth to make lithium-ion batteries for all the electric cars we're likely to need.
To say that we badly need better batteries is an exercise in world-class understatement.
Last year, we talked about a new chemistry based on glass and sodium, announced by the man responsible for the lithium-ion technology we're largely using at the moment. It's far too soon to expect results from such an early announcement, of course, but it's not the only post-millennial announcement of a new battery technology. In the last 18 years, we've even seen hydrogen fuel cells proposed for portable power applications. The problem is that hydrogen has to be isolated using an energy-intensive process, so it really functions as a storage medium. It is, therefore, effectively a battery, and it is less effective in that role than, well, batteries.
Is it ever likely there will be a significant improvement in what we can get from batteries? Well, maybe, but there are some significant barriers in the fundamental physics of how we get two different materials to give up an electron. There's also the safety argument – the specific energy of current batteries is high enough that they're a real hazard in the event of a catastrophic failure, sufficiently so that airlines are cautious about allowing people to fly with them. Then again, similar concerns apply to a tank full of highly flammable, light-fraction oil – like we have in every car.
Verdict: Desperately needed, but only plausible to a certain extent, and raising significant safety concerns.
Faster, bigger, more permanent storage
Things continue to advance – this LTO-5 tape is now old hat. But the fundamental technologies are resisting change valiantly
Better storage tends to mean faster, bigger or more permanent, and storage manufacturers have been chasing all of those things for decades. The transition from spinning metal to solid-state storage has taken longer than we might have guessed, but that's not a bad thing. Hard disks are refusing to die, largely because a determined development effort is allowing them to keep beating solid-state memory on price per gigabyte while maintaining respectable speeds.
Presumably, solid-state memory will eventually supplant hard disks completely, but in 2018 it would be hard to predict when. The problem, if there is one, is that the NAND flash memory we're mainly using is far from an ideal technology. The earliest common devices which looked and felt like modern flash cards were the PCMCIA cards of the 1990s. They mainly used static RAM, which requires battery power to maintain its contents. NAND doesn't, but the process used to write data to it is complex and time-consuming, which exacerbates the tendency of NAND to wear out if heavily used. Also, none of these technologies solves the problem of long-term, high-reliability data storage, which remains the last bastion of magnetic tape.
Replacements for NAND have been proposed, including resistive RAM, called ReRAM or RRAM by some companies, and ferroelectric RAM, called FeRAM. While some commercial ReRAM exists in 2018, none of these has so far become available in a form that's anywhere near the size or speed of NAND flash. In 2016, the University of Southampton released details of research at its Optoelectronics Research Centre into femtosecond lasers, which achieve very high power over very short periods and would allow data to be written into the three-dimensional structure of ordinary glass. The technique is dense and should be extremely long-lived, being based on very stable materials, though it is purely a research project and a long way from being available in a form you can attach to a USB bus.
Verdict: New tech seems just behind the curve of the things it's trying to replace, which is OK, though the lack of good archival storage is a problem.
Lightfield cameras
A test short was produced with Fraunhofer's nine-camera array. Here we see the nine unprocessed images
Fraunhofer has been showing its lightfield arrays at film and TV conferences for some time, and Lytro has been talking about lightfield cameras since at least 2012, though the first widespread commercial use of multi-camera arrays has been in cell phones. Using more than one camera to observe the scene, a phone might be able to derive approximate depth data sufficient for tricks such as artificial focus blur simulation. The thing is, that's really a very limited deployment of a potentially very capable technology.
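The phone trick comes down to simple parallax geometry: for an idealised, rectified two-camera pair, depth is focal length times baseline divided by disparity (how far a feature shifts between the two views). A minimal sketch of that relationship – the function name and every number here are illustrative assumptions, not taken from any phone's actual software:

```python
# Sketch: approximate depth from a two-camera array, assuming an
# idealised rectified stereo pair. Focal length is in pixels,
# baseline in metres, disparity in pixels. All values illustrative.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair."""
    return focal_px * baseline_m / disparity_px

# A nearby object shifts more between the two views (larger disparity)
# than a distant one, so depth falls as disparity rises.
near = depth_from_disparity(focal_px=1000.0, baseline_m=0.012, disparity_px=24.0)  # 0.5 m
far = depth_from_disparity(focal_px=1000.0, baseline_m=0.012, disparity_px=6.0)    # 2.0 m
```

Real phones face messier problems – lens distortion, imperfect alignment, and finding the disparity in the first place – which is why the resulting depth data is only approximate.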
The term “lightfield” refers not only to the brightness and colour of light in a space, but also to the direction in which that light is travelling. Something close to a true lightfield camera would have a sensor with photosites capable of detecting not only brightness and colour but also direction – and of course, any photosite might have photons hitting it from many directions almost simultaneously. It's hard to make a one-piece sensor capable of recording that sort of data, so the devices we currently see are best described as sparse lightfield arrays. They photograph the scene from many slightly different angles simultaneously and interpolate between those views.
It's an approximation, but it's an incredibly powerful one. The depth data from current lightfield arrays is generally better than that from other depth camera technologies – good enough to be used to isolate objects in a scene for grading and even relighting, as we saw back in 2015. With proper post-processing, the technique can simulate variable depth of field with real optical bokeh, not the simple blur techniques of cell phones. It can allow for six-axis stabilisation. In short, it entirely changes what a camera can do and how it works, but in a way that's also the problem. A lightfield camera for cinema wouldn't work like cameras do now, with a single sensor and a lens, and that's off-putting to some people.
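The refocusing idea is commonly implemented as shift-and-average (synthetic aperture refocusing): shift each camera's view to cancel the parallax of a chosen depth, then average, so objects at that depth line up and stay sharp while everything else blurs. A one-dimensional toy sketch under heavily idealised assumptions – every name and number here is our own illustration, not any vendor's pipeline:

```python
# Sketch: synthetic refocus from a sparse camera array, 1-D toy version.
# Each camera sees the same signal displaced by (camera offset * disparity).
# Shifting each view back by the disparity of a chosen focus depth, then
# averaging, re-aligns objects at that depth; everything else smears out.

def shift(view, n):
    """Shift a 1-D signal right by n samples, zero-filling the gap."""
    if n >= 0:
        return [0.0] * n + view[:len(view) - n]
    return view[-n:] + [0.0] * (-n)

def refocus(views, offsets, disparity):
    """Average the views after undoing each camera's parallax shift."""
    acc = [0.0] * len(views[0])
    for view, off in zip(views, offsets):
        shifted = shift(view, -off * disparity)
        acc = [a + s for a, s in zip(acc, shifted)]
    return [a / len(views) for a in acc]

# A point source seen by three cameras: each view is displaced by the
# camera's offset. Refocusing with the matching disparity re-aligns it.
offsets = [-1, 0, 1]
views = [shift([0, 0, 1, 0, 0], off) for off in offsets]
sharp = refocus(views, offsets, disparity=1)    # point stays sharp
blurred = refocus(views, offsets, disparity=0)  # "focused" at the wrong depth
```

Refocusing with the wrong disparity spreads the point across neighbouring samples – exactly the synthetic bokeh effect, here in crude 1-D form.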
Verdict: Hugely promising, fantastically powerful, but film and TV is a very conservative field.
In part 2 we’ll cover VR 360 and HDR.
Image - Shutterstock - lassedesignen