17 Mar

Is this the end of Moore's law?

[Image: the Intel 4004. Credit: Mats Brystrom]


Moore's law is an assumption which may have run its course. So what happens in the post-Moore world? Is this the end of rapid development in digital video?

At the recent Dell Precision day in Austin, Texas, one of the speakers, Ben Cochran, Senior Software Architect at Autodesk, made a point that made me sit up and slice through my jet-lagged, post-lunch narcolepsy. I'd asked a question, and his answer was quite surprising.

I'm a big fan of the theory that technology is accelerating. There are enough articles in RedShark to make you well briefed on the subject if you haven't come across the idea before, but it's pretty well established, and if you compare the last fifty years with the fifty before that, there's plenty of evidence that things are changing faster.

Moore's law

Part of this, but only part, is Moore's Law, which isn't a law at all but merely an observation that's become a pretty accurate prediction as well. It's stated in several forms, but essentially it says that as the features on a chip get smaller, you can pack more electronics onto the same piece of silicon. It's a virtuous circle where the more you cram in, the faster everything goes, and, amazingly, it all gets cheaper as well.

Typically, you'll find that processors and digital chips in general can fit twice as much in the same space every 18 months.
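That 18-month doubling compounds very quickly. As a quick illustration (mine, not from the article), here is what the arithmetic looks like, using the Intel 4004's well-known starting count of roughly 2,300 transistors in 1971:

```python
# Illustrative only: compounding the 18-month doubling period from the
# article. The 2,300-transistor figure for the Intel 4004 (1971) is a
# widely quoted number; the projection is a toy calculation, not a claim
# about any real chip.

def transistors(start_count, years, doubling_period_years=1.5):
    """Project a transistor count forward assuming a fixed doubling period."""
    doublings = years / doubling_period_years
    return start_count * 2 ** doublings

# 30 years at one doubling every 18 months is 20 doublings,
# multiplying the starting count by about a million:
print(round(transistors(2300, 30)))  # 2411724800, i.e. roughly 2.4 billion
```

The point is the shape of the curve, not the exact figure: twenty doublings turn thousands into billions.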

Whether you call it a symptom or a cause of exponentially accelerating technology doesn't really matter: what matters is that with each generation of computing tools, we design the next, better generation.

All of this is established enough not to require an overall justification. It just is. But when you look at smaller parts of the whole, there is a lot of variation: this is a macro effect that doesn't always hold at the micro level.

Heavy Duty

The point the speaker made was that in the past (five to ten years ago, and earlier still), if you wanted to increase your computing power, say to render something faster, you'd have to buy a more powerful machine. That's because the heavy-duty applications that need heavy-duty computers were written for a single processor. To make things run faster, you needed a faster computer. And that's where it gets expensive, just as it does with cars.

It's easy enough to make an ordinary car do sixty, or even a hundred miles an hour. But if you want one that will do two hundred, it's not just going to cost twice as much as a car that will only do one hundred. It will probably cost ten or twenty times as much. And if you want one that will do four hundred - it's going to have to be a special design, probably with a jet engine, and the chances are that it will cost millions.

Typically, with computers, there's more to running fast than turning a big, LED-illuminated knob labelled "Speed". That's why, with a single computer, the faster it gets, the (exponentially) more it costs.

But we've passed through the era where we're turning up clock speeds all the time. Since about 2004, this hasn't been the preferred way to get more processing power.
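The alternative that replaced ever-higher clock speeds is spreading work across many cores. As a minimal sketch (my illustration, not the speaker's code), an embarrassingly parallel job like rendering independent frames can be farmed out to every core in a machine:

```python
# Illustrative sketch: rather than a faster clock, split independent
# units of work (here, stand-in "frames") across all available cores.
from multiprocessing import Pool

def render_frame(n):
    # Stand-in for real per-frame rendering work; any CPU-bound,
    # independent computation parallelises the same way.
    return sum(i * i for i in range(n * 1000))

if __name__ == "__main__":
    frames = range(1, 101)       # 100 independent "frames"
    with Pool() as pool:         # one worker process per CPU core by default
        results = pool.map(render_frame, frames)
    print(len(results))          # all 100 frames rendered
```

The catch, as the article goes on to suggest, is that software has to be written this way: a program built for a single processor gets no benefit from the extra cores.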




  • There is one other factor here, although I can only speak from the point of view of a Linux operating-system developer. The Linux kernel is becoming ever more efficient: given the same hardware platform, I see faster rendering times from one generation of Linux OS to the next. The latest kernel generation, 3.11, offers higher performance and efficiency than past versions. I would suggest that, at least in the Windows world, that is not the case. I can take hardware platforms originally designed for the XP operating system and reuse them for processing-intensive tasks once they are converted to a Linux OS based on 3.11.

  • A nice article, David, and an interesting reply from the speaker to your question. I had also figured that at some point Moore's Law would take a side step: as we run out of "speed" technology, we would, like a rocket firing extra boosters alongside a single one, use extra "boosters" to do the extra work, giving us more "speed" at lower cost (my reason for putting the word in quotes).

    William Kerney, replying here, also makes a very valid point about the differences between the way the Windows hive works as opposed to the 'nix kernels. That's why the 'nixes have proved themselves in heavy mathematics, science, weather and astronomy types of processing.

    I feel that as we walk (run!) down the years with technology, efficient and clever coding will play a big part, alongside the hardware and the parallel processing you mentioned, in offering seemingly unlimited performance. The other issue with "speed" is distance: moving an electrical signal full of binary code around the world (remember, the speed of light is the fastest we know) will also be a challenge, compared with current speeds and bottlenecks. Maybe one day we'll approach the positronic matrix of movie sci-fi to combat distance.

    Regards

David Shapton

David is the Editor In Chief of RedShark Publications. He's been a professional columnist and author since 1998, when he started writing for the European Music Technology magazine Sound on Sound. David has worked with professional digital audio and video for the last 25 years.
