
Hidden chip in Pixel 2 is a huge leap forward in video technology

2 minute read

Image: Google. A new dedicated IPU dramatically improves HDR+ processing

Revealed only a couple of weeks after the phone’s launch, the Pixel Visual Core inside Google’s Pixel 2 is dedicated on-board machine learning hardware for taking HDR+ pictures.

On-board silicon is a very definite growing trend in the mobile world as, in the words of ABI Research, companies fear being “left behind as AI rockets beyond news headlines to both practical application and market interest.”

And ABI is name-dropped here rather than anyone else because it has identified a definite growing trend in on-device machine learning (which it also refers to as edge processing and/or edge learning) in everything from earbuds to cameras. It reckons that only about 3% of AI processing will be done on-device in 2017 with the balance taking place in the cloud, but that will rocket to 49% by 2022.

That’s 2.7 billion devices. Which is a lot.

Smartphones are inevitably going to represent a lot of that number, and the news that Google has developed its first mobile chip and then buried it, initially unheralded, in the Pixel 2 is all part of that upward curve. The unheralded part was perhaps unexpected, but as the company plans to switch the chip on via software updates in the coming months, that makes a little more sense from a marketing angle at least.

At the Pixel Visual Core’s centre is an eight-core, Google-designed Image Processing Unit (IPU): a fully programmable, domain-specific processor. Each core packs 512 arithmetic logic units (ALUs), for a combined raw performance of more than 3 trillion operations per second, all delivered on a mobile power budget.
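As a quick sanity check on those numbers: eight cores with 512 ALUs each is 4,096 ALUs in total, so a clock speed in the high hundreds of megahertz is enough to clear the 3-trillion mark. The sketch below back-solves a plausible clock; Google hasn’t published one, so treat that figure as an assumption.

```cpp
#include <cstdio>

int main() {
    // Published figures: 8 IPU cores, 512 ALUs per core.
    const double cores = 8;
    const double alus_per_core = 512;

    // Assumed ~750 MHz clock, back-solved from the "3 trillion
    // ops/sec" claim; Google has not published a clock speed.
    const double clock_hz = 750e6;

    // Simplifying assumption: one operation per ALU per cycle.
    const double ops_per_sec = cores * alus_per_core * clock_hz;

    printf("Total ALUs: %.0f\n", cores * alus_per_core);     // 4096
    printf("Ops/sec: %.2f trillion\n", ops_per_sec / 1e12);  // ~3.07
    return 0;
}
```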

This, reckons Google, means that HDR+ can run five times faster than it does on the main application processor, while using less than one-tenth of the energy.
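Those two figures together also say something about power draw. Since power is energy divided by time, finishing the job in a fifth of the time on a tenth of the energy implies the IPU pulls roughly half the power of the application processor while it runs. That is a back-of-envelope reading of Google’s numbers, not a published spec:

```cpp
#include <cstdio>

int main() {
    // Google's claims, normalised so the application processor = 1.0.
    const double time_ratio   = 1.0 / 5.0;   // HDR+ runs 5x faster...
    const double energy_ratio = 1.0 / 10.0;  // ...on a tenth of the energy

    // Power = energy / time, so the two ratios divide out.
    const double power_ratio = energy_ratio / time_ratio;  // 0.5

    printf("Implied IPU power draw vs app processor: %.0f%%\n",
           power_ratio * 100);  // 50%
    return 0;
}
```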

There’s an interesting wrinkle to all this. To make the processor as efficient as possible, more control than usual has been handed over to the software, which in turn makes it more of a challenge to program. As a result, the IPU uses domain-specific languages, Halide for image processing and TensorFlow for machine learning, while a custom Google compiler optimises the code for the underlying hardware.
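For a flavour of what that looks like in practice, here is a minimal Halide pipeline (illustrative only, not taken from Google’s HDR+ code). Halide’s defining trick is separating what is computed at each pixel from how that computation is scheduled onto the hardware, which is precisely the control the Pixel Visual Core hands to software.

```cpp
// A minimal Halide pipeline that brightens an 8-bit greyscale image.
// Illustrative only; Google's actual HDR+ pipeline is far more complex.
#include "Halide.h"
using namespace Halide;

int main() {
    ImageParam input(UInt(8), 2);  // 8-bit, 2-dimensional input image
    Var x("x"), y("y");

    // The algorithm: what value each output pixel should take.
    // Widen to 16 bits before adding so the sum cannot overflow.
    Func brighter("brighter");
    brighter(x, y) = cast<uint8_t>(min(cast<uint16_t>(input(x, y)) + 50, 255));

    // The schedule: how the computation maps onto hardware. Here we
    // vectorise along rows and run rows in parallel, as we might on a
    // CPU; when targeting the IPU, Google's custom compiler takes over
    // this mapping.
    brighter.vectorize(x, 8).parallel(y);

    brighter.compile_jit();  // compile the pipeline without running it
    return 0;
}
```

The algorithm lines stay the same whatever the target; only the schedule changes, which is what lets one codebase be retuned for very different silicon.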

The bad news for developers is that this makes the chip a bit harder to program; the good news is that they will be able to program it at all. Google is throwing the keys to the Pixel Visual Core wide open: in the coming weeks it will enable the chip as a developer option in the latest version of Android, Oreo 8.1. Later, it will be opened up to all third-party apps, meaning anyone and everyone who wants to will be able to drive the Pixel 2’s HDR+ tech.

The result will hopefully be some gorgeous images and that probably won’t be the end of it either. As the Pixel Visual Core is programmable, more image enhancements and more machine learning apps will be able to tap into its capabilities.

And while the camera capabilities of phones represent an important USP for manufacturers, and we can expect to see more and more on-board silicon thrown at computational photography in the near future, we can also expect to see more dedicated silicon developed for other specific purposes. The chip architecture of phones just five years hence could be very different from the ones we hold in our hands today.

Tags: Technology
