
Intel Cannonlake 8-core and the future of the PC


Intel Cannonlake 8-core for consumer desktops? Image: Intel / RedShark News

There's buzz around a possible Intel Cannonlake 8-core processor for consumer desktop computers. Our Technical Editor examines why software engineering gets harder as core counts rise, and imagines what it would take to arrive at a more elegant PC computing solution.

It's been clear for a while that making computers faster by raising clock speeds is reaching the limits of practicality. The solution to date has been more cores – parallelism, where several sets of identical resources all work together. This creates all kinds of problems and opportunities, and rumour currently abounds that Intel is planning an eight-core CPU in its Cannonlake range aimed at the home and desktop market, a slot currently occupied by four- or, at best, six-core parts.

Cause for pause

Now, there's some need for caution here. The buzz is founded on a snippet of text from the LinkedIn page of an Intel design engineer, which has perhaps been overanalysed, and even if it's accurate, we're months or years from a product announcement. Of course, eight-core CPUs have existed for some time, albeit at several times the cost of the processors typically found in the desktop systems that drive the lighter end of postproduction.

It's also been possible to build 16-core systems around Intel's Xeon range for some time, although the expensive processors require even more expensive motherboards and memory. The resulting 16-core Xeon machine might be ten times the price of a more conventionally-specified home PC, but only three or four times the speed. If Intel has figured out how to make eight-core CPUs at a price that's feasible for the home market, nobody will be complaining. There are, of course, alternative interpretations of the available information which suggest that Intel hasn't cracked the problem, and given the lamentable lack of effective competition, we'd be forgiven for suspecting that Intel has very little incentive to innovate in this regard.

Complexity of multi-core

Either way, this does raise subsidiary issues. The first is simply that we are probably not making the best possible use of the multi-core processors we already have. Software engineering has always relied on lists of tasks to be done one after the other. When CPUs went to two cores, it was reasonable to expect the average computer to have two completely separate programs running simultaneously to take advantage of them. With four or eight cores (and even more not-quite-real cores through hyper-threading), this is no longer enough; modern software engineers must write code that expressly divides work into tasks which can all proceed at once, and the more cores there are, the harder this becomes. The approach taken by GPUs, beloved of people using Resolve, is different: each of the sometimes thousands of cores performs exactly the same operation on different data, which is extremely useful for a certain subset of tasks and completely useless outside that subset.
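To make the CPU side of that concrete, here's a minimal C++ sketch of the explicit work-splitting involved, assuming a trivially divisible job (summing a large array); the workload and variable names are illustrative, not drawn from any particular application:

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // Ask how many hardware threads the machine offers; the call may
    // return 0 on some platforms, so fall back to 1.
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

    std::vector<double> data(1'000'000, 1.0);  // stand-in workload
    std::vector<double> partial(cores, 0.0);   // one result slot per thread
    std::vector<std::thread> workers;

    // The programmer, not the CPU, decides how to carve the work up:
    // each thread sums its own contiguous slice of the data.
    const std::size_t chunk = data.size() / cores;
    for (unsigned t = 0; t < cores; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = (t + 1 == cores) ? data.size() : begin + chunk;
        workers.emplace_back([&partial, &data, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();  // wait for every slice to finish

    std::cout << std::accumulate(partial.begin(), partial.end(), 0.0) << '\n';
}
```

Nothing here happens automatically: the programmer chooses the thread count, carves up the data and gathers the partial results, and that bookkeeping only grows more intricate as the work becomes less uniform.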

Whether or not Intel is about to release a cheap eight-core CPU, there are other problems to solve. It's very difficult to recognise which parts of a piece of software can reasonably be run simultaneously, and a lot of work is currently being done on ways to automate this, as the sketch below illustrates. The second, broader concern is whether we need to change our entire approach to computing – whether we should still differentiate between the large numbers of simple cores on a GPU and the small number of complex cores on a CPU, and how we allow these resources to communicate with each other and with the rest of the system. If we were willing to start from a blank sheet, PCs would not be built the way they're currently built.
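As a sketch of why that recognition is so hard, consider two hypothetical loops that look almost identical (the function names are purely illustrative). The first can be handed out to as many cores as you like; the second is a chain in which every step needs the result of the step before it:

```cpp
#include <cstddef>
#include <vector>

// Safe to split across cores: no iteration depends on any other.
void scale(std::vector<float>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] *= 2.0f;
}

// Not safe to split naively: each iteration reads the previous result,
// so the loop is one long chain of dependencies (a running prefix sum).
void prefix_sum(std::vector<float>& v) {
    for (std::size_t i = 1; i < v.size(); ++i)
        v[i] += v[i - 1];
}
```

There are clever parallel algorithms for this particular prefix-sum pattern, but an automatic tool first has to prove which category any given loop falls into, across millions of lines of real-world code.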

We can imagine a future in which the PCs we have now exist as a backward-compatibility layer under something more elegant. One particularly attractive idea is that of plug-in expansion cards containing processor resources of variable core count and complexity, all operating as equal partners in a computing environment that could be tailored to the job at hand. Current general-purpose GPU techniques could be seen as baby steps toward this sort of flexibility. It would take fundamental advances in software engineering technique, but regardless of anyone's grand ideas, there's a bigger obstacle still: all of the world's best software is written for what we have right now, which makes changing things a monumental undertaking.
