21 Apr 2018

What happens after Moore's Law? It's not what you think!


This was a hugely popular article when it first appeared. Here's another chance to read about what might happen after Moore's law!

Moore's law, the variously observed phenomenon that the number of transistors you can fit on a given size of silicon chip doubles roughly every two years (often quoted as every 18 months), is about to expire. It has to. If it didn't, then within a decade or so we would have components smaller than molecules, and we know that isn't possible.

This empirical law, which isn't actually a law at all - nothing has to abide by it - has been the single most important driving factor in our digital civilisation for several decades. It's because of it that, while the slide rule ruled for some 340 years, in the 40 or so years since the electronic calculator wiped out its imprecise analogue predecessor, our calculators today (we call them smartphones) have become arguably a billion times faster.

You can't have that sort of rate of change for half a typical lifetime without some fundamental effect on the way that we do things. And there has been. Has anyone heard of the internet?

There are processes that accelerate technology even without specific hardware breakthroughs. In other words, if all progress in building bigger, faster chips were to stop today, we would still have accelerating technology for some time to come. And it's these phenomena, rather than sheer speed increases, that have been driving camera development forward recently. (Although speed is still important - you need very fast processors to deal with 4K video coming off a sensor, for example.)

Advancement through algorithms

The first of these processes is that algorithms are getting better. This isn't entirely surprising, because if you take a group of clever people, the more they think about a problem, the more elegant the solutions they'll find for it. So the technology could stand still and yet the answers would still come faster and more accurately.
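To make that concrete, here's a toy illustration (mine, not the article's): the same machine, two algorithms for the same job - counting how often each word appears in a list. Nothing about the hardware changes; the second version is simply a better idea, and it is dramatically faster.

```python
# Toy illustration: same hardware, same job, two algorithms.
import time
import random
import string


def count_words_naive(words):
    """For each distinct word, scan the whole list again: roughly O(n^2)."""
    return {w: words.count(w) for w in set(words)}


def count_words_better(words):
    """One pass with a dictionary: O(n)."""
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts


if __name__ == "__main__":
    # 50,000 words drawn from a vocabulary of 2,000 made-up five-letter words.
    vocab = ["".join(random.choices(string.ascii_lowercase, k=5)) for _ in range(2000)]
    words = random.choices(vocab, k=50_000)

    t0 = time.perf_counter()
    a = count_words_naive(words)
    t1 = time.perf_counter()
    b = count_words_better(words)
    t2 = time.perf_counter()

    assert a == b  # both give identical answers
    print(f"naive:  {t1 - t0:.3f} s")
    print(f"better: {t2 - t1:.3f} s")
```

On a typical machine the one-pass version finishes in milliseconds while the naive version takes on the order of a second - a bigger jump than several doublings of Moore's law, achieved with nothing but a better algorithm.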

And it doesn't just depend on people. As computers and computer techniques get better, they can contribute to the efficiency of algorithms, too. Don't forget that one of the principles driving accelerating technology is that each generation of high-tech tools can be used to make the next, better generation. This is a virtuous circle where problems and issues are solved with ever increasing acuity and speed.

A better debayering algorithm is going to give you better video, and probably faster, too. Clever people will invent better ways to solve problems, and when they come up against a brick wall, they will use computing power to give them new data on which to base their new solutions.
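The article doesn't go into how debayering works, but a minimal sketch shows what there is to improve. Below is the crudest possible approach - bilinear interpolation over an assumed RGGB Bayer mosaic, written in NumPy purely for illustration, not any camera's real pipeline. Real debayering uses far cleverer edge-aware (and increasingly machine-learned) algorithms; this is just the baseline they beat.

```python
# Minimal bilinear debayer sketch (illustrative only).
# Each sensor pixel measures one colour; the two missing colours are filled in
# by averaging the nearest pixels that did measure them.
import numpy as np


def debayer_bilinear(raw):
    """raw: 2-D float array from an RGGB sensor. Returns an H x W x 3 RGB image."""
    h, w = raw.shape
    # Which pixels actually measured each colour (RGGB layout assumed).
    r_mask = np.zeros((h, w), dtype=bool)
    r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool)
    b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def fill(mask):
        # Average the measured samples inside each pixel's 3x3 neighbourhood.
        # (np.roll wraps around at image edges - fine for a sketch.)
        vals = np.where(mask, raw, 0.0)
        cnt = mask.astype(float)
        sum_3x3 = np.zeros((h, w))
        cnt_3x3 = np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                sum_3x3 += np.roll(np.roll(vals, dy, axis=0), dx, axis=1)
                cnt_3x3 += np.roll(np.roll(cnt, dy, axis=0), dx, axis=1)
        estimate = sum_3x3 / np.maximum(cnt_3x3, 1.0)
        # Keep the real measurement wherever we have one; interpolate elsewhere.
        return np.where(mask, raw, estimate)

    return np.dstack([fill(r_mask), fill(g_mask), fill(b_mask)])


# Tiny synthetic example: a 4 x 4 mosaic of made-up sensor values.
mosaic = np.arange(16, dtype=float).reshape(4, 4)
print(debayer_bilinear(mosaic).shape)  # (4, 4, 3)
```

Every refinement on this - weighting neighbours by edge direction, estimating the green channel first and so on - is exactly the kind of software-only gain described above: the sensor stays the same, the picture gets better.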

You could say that one proof of this principle is that when your smartphone is updated with a new version of its OS (say iOS or Android), it becomes a new phone - able to do more and do what it does better. The same goes for cameras, too. It's quite common now to buy a camera and then, six months later, find that there's a firmware upgrade that gives it new abilities: same hardware, better software.

There's another way things can improve without better hardware. It's the same principle as the way we learn things as humans.

How we learn

When we start to learn about the world and our environment, we do it in stages. We learn something small and then learn something else small. We store these unconnected areas of knowledge and then - sometimes - we see a connection between them. We somehow manage to see the smaller areas of knowledge from a higher perspective and notice things that weren't there at ground level. All of our knowledge is hierarchical like this. You can see it at work as people learn things.

I worked with a guy once who was a great mechanical engineer but who just didn't "get" computers. This was admittedly in the days before Windows, when you had to type cryptic commands at the prompt. For some reason, he was a slow learner when it came to PCs. One day, I realised why.

He saw me typing "DIR" into the computer. For anyone born since about 1990, this might seem arcane, but it was the old operating system command to simply list all the files in a directory (which is what folders used to be called). My colleague looked fascinated, but puzzled. He said, "Do you have to type those letters in the same sequence all the time? How on earth do you remember that?"

What he didn't realise is that these were words. They weren't seemingly random sequences of characters - the kind you'd have about as much chance of remembering as a printout of modem line noise (something else that dates me severely).

It takes a leap to realise that once you know that something is a word, you just have to remember that word and not the individual sequence of letters, because that sequence is already in your brain's database.

And then you learn lots of words, or "commands," and your ability grows very quickly.

I'm labouring this point because it is an important one. It's about hierarchical learning, and it's how humans learn now and how machines will learn in the future.

From the ground to the helicopter

In fact, this is how our brains work. We learn to recognise things at one level and then we look at them at a higher level and see patterns and correspondences. As we learn to deal with the low level stuff, we move up a level. Eventually, we deal with quite abstract concepts. Here's an example:

At one level, we see a round object. At the next level up, we see several other objects, all with different shapes, all of which we recognise. Further up, we realise that these familiar objects, in this particular spatial relationship to each other, represent an eye. At the same level, we're assembling all the parts of a nose to recognise another important feature of a face. Move up another level and we're looking at a face. Up again and we realise we're looking at our wife. Going up several more levels (taking into account her expression, her body language and so on), we realise that she's happy. At a still higher level, we realise she's happy because this is the first time we haven't forgotten our wedding anniversary. (I'm grateful to Raymond Kurzweil for this type of illustration.)
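Purely to illustrate the shape of that idea - the detectors and thresholds below are invented for the example, not how any real vision system works - here is a toy sketch in which each level only ever consumes the conclusions of the level beneath it:

```python
# Toy sketch of hierarchical recognition. Everything here is hand-coded and
# deliberately simplistic; real systems learn these layers rather than
# having them written out by hand.

def detect_shapes(pixels):
    # Level 1: raw measurements -> simple shapes (faked here with a lookup).
    return pixels["shapes"]

def detect_features(shapes):
    # Level 2: shapes in the right spatial relationship -> facial features.
    features = []
    if {"circle", "almond_outline"} <= set(shapes):
        features.append("eye")
    if "curved_ridge" in shapes:
        features.append("nose")
    if "wide_curve" in shapes:
        features.append("mouth")
    return features

def detect_face(features):
    # Level 3: enough features together -> a face.
    return {"eye", "nose", "mouth"} <= set(features)

def read_expression(face_present, mouth_curvature):
    # Level 4: an abstract judgement built entirely on the levels below.
    return "happy" if face_present and mouth_curvature > 0 else "unknown"


scene = {"shapes": ["circle", "almond_outline", "curved_ridge", "wide_curve"]}
features = detect_features(detect_shapes(scene))
print(read_expression(detect_face(features), mouth_curvature=0.7))  # -> happy
```

The point isn't the (deliberately silly) detectors; it's that each level works on a more abstract description than the one below it - which is the whole idea of the hierarchy.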

We can see exactly how this works. And today, we're able to analyse brains in more and more detail. It's this type of "learning" that is also driving technology. Perhaps the best example is the way that Google is able to take a "helicopter view" of the world, not just with Google Maps, but with all the data it holds. With so much information about their users, Google (and Facebook) can spot trends that wouldn't have been visible from ground level, perhaps detecting patterns in the weather or even correlating economic activity with diet.

Keep this idea of small areas of knowledge feeding into a "bigger picture" in mind as we talk about how technology seems to be moving so quickly.





David Shapton

David is the Editor In Chief of RedShark Publications. He's been a professional columnist and author since 1998, when he started writing for the European music technology magazine Sound on Sound. David has worked with professional digital audio and video for the last 25 years.
