
Speed or reliability: Which is more important in a computer?

I’m surprised that this question isn’t asked more often. It’s a big deal when you’re choosing a computer.

First of all, I need to define what I mean by reliability. I don’t mean a tendency for keys to get stuck. I don’t mean any kind of mechanical flakiness. What I mean is the likelihood of your computer crashing.

I’ve worked with computers for a very long time, and for much of that, I’ve been involved in audio and video. While I don’t often roll up my sleeves these days and build a PC from scratch, some things don’t change. Here’s what might be the biggest one that’s still with us: pushing for the highest speed can lead to unreliability.

Things have improved a lot over the years. Some of the most mysterious crashes were caused by cables. I remember, about 15 years ago when I worked for an NLE workstation company in London, a configuration that acted really strangely, despite following all the rules we’d formulated over several years, and despite the experience of the people building the computers. The issue turned out to be the length and positioning of the IDE ribbon cables inside the computer’s case. The case was a new type: very spacious and easy to work with. It had a strong power supply and everything was easy to get to.

But the problem was the size of the case itself. With the speedy drives we’d chosen, the parallel IDE cables couldn’t keep the data intact. The cables were too long. It was a stark reminder that even digital data depends on the physical and electrical properties of the medium carrying it. When an electrical cable is unsuitable in some way for a digital signal, a binary pulse can end up looking like a gentle ocean swell. Parallel cables make matters worse, with crosstalk between the wires and the likelihood that adjacent signals will arrive out of alignment (or at least beyond the tolerance of the electronics receiving the signal).

Other issues

The other big source of crashes was overheating. We had relatively few computers coming back faulty, but a fair number came in for a service when they were one or two years old. And of those, quite a few came in with a fault description like this:

“Works for a while, and then crashes”. To which you could quite often add “every time”.

Almost invariably, this was caused by the processor getting too hot, and for the most prosaic of reasons: the processor cooling fan wasn’t working. Sometimes it was simply strangled by dust; sometimes it had given up the ghost altogether. But whatever the reason, fixing the fan fixed the crashing.

The strong conclusion from this was that it’s very important to keep processors (CPUs) within their safe operating limits. This type of crash was nasty. It would either freeze the computer completely, or it would chuck you out to a blue screen. Either way, it wasn’t nice for your project. And it was very stressful.
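
If you want to know whether your own machine is staying inside those limits, it’s easy enough to check. Here’s a minimal Python sketch, assuming a Linux system: it reads the kernel’s thermal zone readings under /sys/class/thermal and flags anything above a threshold. The 85°C figure is only an illustration, so treat it as a placeholder for your own CPU’s rated limit.

```python
# A minimal sketch for Linux: read the kernel's thermal zones and flag any
# reading above an illustrative threshold. The 85 degree limit is an
# assumption; check the rated maximum for your own CPU.
from pathlib import Path

THRESHOLD_C = 85.0  # illustrative limit, not a spec

def read_thermal_zones():
    """Return (zone name, temperature in degrees C) for each readable zone."""
    readings = []
    for zone in sorted(Path("/sys/class/thermal").glob("thermal_zone*")):
        try:
            name = (zone / "type").read_text().strip()
            millidegrees = int((zone / "temp").read_text().strip())
            readings.append((name, millidegrees / 1000.0))
        except (OSError, ValueError):
            continue  # skip zones that can't be read
    return readings

if __name__ == "__main__":
    for name, temp_c in read_thermal_zones():
        warning = "  <-- running hot, expect instability" if temp_c > THRESHOLD_C else ""
        print(f"{name}: {temp_c:.1f} C{warning}")
```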

But fans aren’t the only reason why processors overheat. It can happen if they’re run too fast. At this stage, I need to say that I haven’t worked “inside” PCs for a very long time, so what I’m going to say now is more from what I’ve heard and read in comments to RedShark articles. But, again, some basic and long-lived rules apply.

Building a superfast PC

I totally get the temptation to build a PC and make it as fast as possible. There are some very clever and knowledgeable people out there who understand every variant of every processor, and every little (and big!) tweak to make their CPUs go faster. This involves tuning any number of factors, but it seems that RAM type and speed are incredibly important. There are combinations that work and some that don’t.

Sounds fairly straightforward, doesn’t it? If you need a good PC, ask one of these experts.

Well, yes, that’s true. But it’s actually not as simple as that, because it really depends on what your priorities are.

If you’re a gamer, or just love speed, you’ll use every trick you can glean from your friends and the wider PC community to configure your computer for maximum speed. You’ll even be prepared to do battle with the law of diminishing returns, spending disproportionate sums for small increases in speed.

And you might do very well with this approach. But, ultimately, you are going to have to decide where to draw the line between speed and reliability. And that’s perfectly OK. No one would use a car powered by nitrous oxide to take the kids to school. Nor would you contemplate an engine tuned to within an inch of its life to power an ambulance, even though emergency vehicles would benefit from the speed.

High performance comes at a cost

The point is, of course, that highly tuned vehicles aren’t reliable. They burn themselves out quickly. In the motor racing world at the highest levels, it’s probably OK to have to replace the engine every two races or so. It’s OK because it gets the job done. The upside is worth the downside.

But away from racing, I’d argue that reliability is more important than speed. Even in a well-managed facility, where there are backups and redundancy, it’s annoying and very often financially hazardous if your workstations keep crashing. I suspect most facility owners would step back from the highest possible speeds if it meant that their computers would be crashing all the time (or even, occasionally, in front of a client).

We recently carried a piece written by Matt Bach of Puget Systems. We normally avoid articles written by manufacturers, because they’re rarely impartial. But this one impressed us because of the amount of hard data in it. Despite this, some commenters wanted to argue with Puget’s methodology. But the commenters were seeing it through a lens of speed maximisation. Puget’s lens was reliability.

And it’s easy to see why reliability is so critical to a company that makes workstations. Not only does reliability mean that the company doesn’t spend all its time dealing with complaints about crashing, it also means that clients don’t spend inordinate amounts of time recovering after a disastrous and unseemly exit.

One final thing: when I worked at the computer company, I was amazed at customers’ willingness to work on projects with beta software. They’d switch to a new beta the split second it was released, even with a half-completed project.

In my view, there is no “feature” in beta software that’s worth the risk of using unfinished products. Of course there will be bugs in all applications, but when something’s in beta, it might as well carry a label saying “this buggy software is full of bugs and will trash your project”. Much as I dislike gratuitous Latin phrases: caveat emptor.
