
Why Moore’s Law will continue - and will have little to do with silicon


We are bumping up against the physical barriers of Moore's Law. Image: Shutterstock

Just because we are bumping up against the physical limits of Moore’s Law doesn’t mean we’re about to see the ever-increasing speed of computing come to a halt.

I’m sitting here typing this article on a MacBook 12” Retina. I’m travelling at the moment and it’s the perfect machine to have with me: light, easily powerful enough for writing and web browsing, and the screen is fantastic. I am, of course, connected to the internet, wirelessly. Nothing remarkable about that, perhaps, except that I’m currently at 37,000 feet over the Atlantic Ocean, on my way to New York.

And, believe it or not, the broadband connection is pretty good.

You can understand why: there’s clear blue sky between my plane and the satellite that’s beaming us a signal. It’s all working beautifully. 

Now let’s come back to Moore’s Law.

For some time, there have been signs that the exponential growth in the density of components on a chip has been coming to an end. It simply has to: at the point where an individual component is so small that it is composed of only 18 atoms, we’re clearly approaching a limit. A building made from a single brick isn’t a building: it’s just a brick.

In a year or two, we won’t be able to make faster, cheaper chips simply by squeezing more stuff onto them. That matters because, if you ignore the design costs, it doesn’t really matter how complicated you make a silicon chip: it costs about the same (a gross oversimplification, of course).

But chips will continue to get more powerful, mainly because while you can’t make the individual devices themselves any more complex, what you can certainly do is use lots of them together. It’s getting easier and faster to interconnect multiple chips, or simply to build bigger chips with more processing power on them. Just think about GPUs: the rate of growth here is breathtaking. Some of the latest ones can even process multiple streams of 8K video in real time.

We’ve been locked into Moore’s Law in a very narrow way for a long time now. But computing has spread way beyond single chips. Networks are getting so fast that they’re almost as good as the internal buses inside computers. Indeed, Thunderbolt is largely PCIe on a wire!

Even - or especially - wireless networks will help here. Wireless speeds are increasing, and the ability to interconnect disparate devices means that we will, arguably, see supercomputers emerging out of thin air.

Virtualisation is another technology that has traditionally slowed things down in one sense, but will accelerate them massively in another. Normally, if you try to “emulate” one kind of computer on another kind - let’s say you want to emulate Windows running under Mac OS - then things can slow down as the software “translates” from one machine architecture to another. But modern virtualisation is not only incredibly clever, it allows radically different types of system to run the same software. Ultimately, if you have a virtually limitless number of computers to run your software on, then it doesn’t matter if individual instances run a bit slower. Let’s say that a virtualised machine runs at 50% of the speed of a “native” one. That’s insignificant if you can invoke 1000 instances.
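To put rough numbers on that, here is a quick Python sketch using the hypothetical 50% and 1000-instance figures above, and assuming a workload that splits perfectly across instances:

```python
# A back-of-envelope illustration of the claim above, using the article's own
# hypothetical figures: each virtualised instance runs at 50% of native speed,
# but we can invoke 1000 of them for a perfectly parallel workload.

native_throughput = 1.0      # one native machine, in arbitrary units of work/sec
virtual_throughput = 0.5     # one virtualised instance: half the native speed
instances = 1000             # number of instances we can spin up

aggregate = virtual_throughput * instances
print(f"One native machine:          {native_throughput:.1f} units/sec")
print(f"{instances} virtualised machines:   {aggregate:.1f} units/sec")
# The 50% per-instance penalty is dwarfed by a 500x gain in total throughput -
# provided the work can actually be split across all those instances.
```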

We do need to keep in mind here that none of this is remotely simple. Perhaps the biggest obstacle to a universal distributed computing resource is that not all software tasks work well in parallel; still less when they’re massively distributed.
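Amdahl’s law is the usual way to put a number on that caveat. Here is a minimal Python sketch, with parallel fractions chosen purely for illustration:

```python
# Amdahl's law - the classic way to quantify this limit. If a fraction of a
# task is inherently serial, adding machines only speeds up the rest.

def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Overall speedup when only `parallel_fraction` of the work parallelises."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / workers)

# Illustrative fractions only: a task that is 90% parallelisable tops out below
# 10x no matter how many machines you add; at 99% it manages about 91x with
# 1000 workers, and can never exceed 100x.
for p in (0.90, 0.99):
    print(f"{p:.0%} parallel, 1000 workers -> {amdahl_speedup(p, 1000):.1f}x speedup")
```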

There are new developments too. A big one is that Intel has recently bought FPGA company Altera. FPGAs (field-programmable gate arrays) are reconfigurable chips that can run algorithms at hardware speeds. You can reprogram an FPGA and it will behave like a completely different kind of machine. Keep an eye on this one.

But remember that software has its own improvement curve too. Even if hardware development froze completely, we’d still see progress from software optimisation. But hardware hasn’t frozen, and we’re getting cleverer at writing software (and at some point, we will ditch legacy software techniques completely and enjoy a new era of rapid progress as a consequence).

And then there’s AI. As long as it doesn’t kill us all (that was, hopefully, just a joke!), it will help us to build better technology even faster.

So no, things aren’t going to slow down. Classically, Moore’s Law may be purely about silicon real estate, but it will continue, one way or another.

 

CPU graphic: Shutterstock.com

