Why are computers temperamental?

Author
Discussion

Animal

Original Poster:

5,262 posts

269 months

Thursday 22nd May 2008
I'm not a technophobe, but I switch my laptop on and expect it to work and that's about it. Yes, I always try and install the latest software updates when alerted but I don't know .tif from .pdf and I really don't care.

However, what I do care about is the fact that I can't watch movies with Media Player without the sound and vision being completely out of sync. I do care that Media Player is not responding. I do care that my computer ceases to do what it's told and throws a fit because it's trying to play the Windows logoff sound.

I'm lucky that I've got a good mate who loves computers and he suggests 'rebuilding' my computer every six months or so, but my question is why? Why is this poxy thing so fragile? How do I make it better without throwing it across the garden and frightening the neighbours?

mcflurry

9,104 posts

254 months

Thursday 22nd May 2008
Throwing the cat in among the pigeons - that's the good thing about Macs - they "just work" smile

(I have a PC, a Linux machine and a Mac)

trackcar

6,453 posts

227 months

Thursday 22nd May 2008
I've often wondered why a PC seems to have mood swings. Surely if it's going through the same start-up sequence, doing the same mundane tasks day in, day out (email, surfing, watching YouTube etc.) it should behave the same each and every day. But no. Why?

cyberface

12,214 posts

258 months

Thursday 22nd May 2008
Mine aren't. And I have a lot of them.

'Temperamental' is merely a euphemism for 'defective'. A bit like the classic car 'character' argument, where a dose of anthropomorphism is used to excuse faults, the bottom line is that if a computer doesn't do what you instruct it to do, then either it's user error (i.e. you've given it faulty or contradictory instructions) or it's s**t software (i.e. the people who wrote the instructions in your software, whether operating system or application, wrote bad code or didn't test thoroughly). Or a combination of the two.

Hardware isn't easy to change once it's built, so chip manufacturers tend to adopt a more 'engineering' approach to their products and, in general, computer hardware reliably performs according to specification. Software, however, can always be fixed with a patch... so with the rush to be first to market, software is generally of lower quality than hardware.

If you've ever dealt with programmers / developers you'll know why immediately. Yes, some are thorough engineers who deserve the title 'engineer'... but there are a LOT of cowboys out there who will sling out bug-ridden, inelegant, s**te code. If that isn't caught by diligent testers (something project sponsors tend to be reluctant to spend enough money / time on to ensure quality), it ends up in the end product.

And, it does have to be said, the 'big dog' of the software world, Microsoft, has historically had a reputation for releasing 80% products - in other words, only 80% finished. People using Microsoft systems have therefore become 'used to' unreliability and accept it as 'typical' of computer systems. To be fair, this sure as hell isn't confined to Microsoft - all software houses put out unfinished code to get ahead of the competition from time to time.

We wouldn't put up with it in a car though. The old internet joke that went round about the 'Microsoft Car' and its 10 features (one being that the engine would stop for no apparent reason, but you could coast to the side of the road, restart it, and everything would be OK again...) applies to plenty of companies besides Microsoft.

Computers *are* complicated devices, of course, which means there are many millions of dependencies and test cases, not all of which are feasible to test, but I'm not convinced there is any excuse for the generally poor quality of software other than sloppiness. A lot of programmers are simply not 'engineers' when they need to be. Software controlling nuclear reactors or aircraft, for example, has to be reliable and in general is. But when even space agencies lose multimillion pound spacecraft to software failure - to be fair, the Mars Climate Orbiter mix-up with metric and imperial units wasn't the programmers' fault (though they should have spotted it) but one of incorrect requirements specifications, so the fault of the analysts - I can understand why people have the impression that computers are inherently 'temperamental' and sometimes go wrong for no reason whatsoever.
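
To make that concrete, here's a toy sketch in Python of the Orbiter failure mode - the numbers and function names are invented for illustration, nothing like the real flight code. Each half is 'correct' on its own; the defect lives in the gap between their specifications:

LBF_S_TO_N_S = 4.44822  # one pound-force second, expressed in newton seconds

def ground_software_impulse():
    # This side delivered imperial units; the code itself has no bug.
    return 100.0  # pound-force seconds

def onboard_navigation(impulse_n_s):
    # This side consumes the value as newton seconds, exactly per *its* spec.
    return impulse_n_s / 4.0  # stand-in for some downstream calculation

wrong = onboard_navigation(ground_software_impulse())  # silently off by ~4.45x
right = onboard_navigation(ground_software_impulse() * LBF_S_TO_N_S)
print(wrong, right)  # 25.0 vs ~111.2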

The reality is that there is *always* a reason, but MS have got enough people used to rebooting when things go wrong that it's accepted as part of the experience. It shouldn't be that way, but there you go. On the other side of the coin, if you can get your users to accept the 80% experience then you can progress faster, by moving on to new ideas and technology before you've perfected the old stuff. A lot of the high rate of progress can be attributed to this.

However I prefer my computers to 'just work' when I operate them according to the instructions, at the very least. Obviously you can break anything if you try, but the standard cases should have been tested thoroughly. The main problem with Windows is that Microsoft don't have control over the hardware, so their software can be running on a combination of components that Microsoft simply can't be responsible for testing. A tenuous analogy would be building a kit car, then bunging in a Ford ECU and wiring it all up (let's assume the basics: it's a Ford 4-cylinder and the ECU is from a Ford 4-cylinder car, but not necessarily the same model or capacity) and expecting it to run without calibrating it.

I have no intention of this thread getting into yet another tedious platform argument so I'm not going to mention anything about Apple or Linux or Solaris or whatever. But the general problem can be put down to a lot of programmers not being 'engineers' in the traditional sense when they *need* to be, the people writing the specifications or doing the testing being slack, and the sheer number of combinations of hardware making thorough testing impossible. And this applies across the spectrum of platforms. Compare general-purpose computers with single-use embedded systems... they tend to be designed and programmed by proper engineers and all cases can be tested (e.g. a network switch or router) and as a result tend to 'just work' without the random behaviour of some unfortunate desktop installations.
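
To put rough numbers on that last point, a back-of-envelope sketch in Python (the component counts here are invented, and generous to the testers):

from math import prod

options = {
    "motherboard": 50,
    "graphics card": 100,
    "sound card": 30,
    "network card": 40,
    "driver versions per device": 10,
}

combinations = prod(options.values())
print(f"{combinations:,} distinct configurations")  # 60,000,000
# At one hour of testing per configuration, that's roughly 6,800 years.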

That said, there's really no excuse for laptops - the manufacturer should choose components that work with the operating system chosen, and then thoroughly test that combination of hardware with the operating system sold with the unit...

Errrr bit of a rant there. Apologies smile

PJR

2,616 posts

213 months

Friday 23rd May 2008
mcflurry said:
Throwing the cat in among the pigeons - that's the good thing about Macs - they "just work" smile

(I have a PC, a Linux machine and a Mac)
You've gone and done it now... You know that, don't you! hehe

P,

FlossyThePig

4,085 posts

244 months

Friday 23rd May 2008
mcflurry said:
Throwing the cat in among the pigeons - that's the good thing about Macs - they "just work" smile

(I have a PC, a Linux machine and a Mac)
Macs work because they are a closed box. As there are only a few permutations of components, it is easier to get them talking to each other.

PCs are open, so there are many permutations of hardware and software combinations. Software drivers for the various components should get over the problem, but the number of possible combinations of hardware options means they can't all be tested. Problems can only be solved once they can be identified.

Back to the original problem. The issue may be down to driver problems. The simple answer is to make sure you have the latest drivers, but getting and installing them is not made easy by manufacturers, who assume all PC owners are techies.
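
If you want to see what you're dealing with first, a rough Python sketch like this (Windows only, and assuming the wmic tool is on your path) will dump installed driver names and versions to compare against the manufacturer's site:

import subprocess

result = subprocess.run(
    ["wmic", "path", "Win32_PnPSignedDriver",
     "get", "DeviceName,DriverVersion"],
    capture_output=True, text=True,
)
print(result.stdout)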

Ash 996 GT2

3,836 posts

242 months

Friday 23rd May 2008
cyberface said:
Lots of valid points
yesclap

spitfire-ian

3,848 posts

229 months

Friday 23rd May 2008
Ash 996 GT2 said:
cyberface said:
Lots of valid points
yesclap
+1 yes

anonymous-user

55 months

Friday 23rd May 2008
cyberface said:
If you've ever dealt with programmers / developers you'll know why immediately. Yes, some are thorough engineers who deserve the title 'engineer'... but there are a LOT of cowboys out there who will sling out bug-ridden, inelegant, ste code. This, if not caught by diligent testers (something project sponsors tend to be reluctant to spend enough money / time on to ensure quality) ends up in the end product.
I think that's perhaps a touch harsh. I know I release buggy code sometimes and I absolutely hate it. But when management put pressure on you to meet ridiculous timescales, shipping anyway can be a way to buy yourself a little more time to fix the bugs in the QA phase. The problem being that by then the next bucket of censored has landed on your head. frown

Overall though, a very accurate description.

To the OP - once you have a stable computer soon after setup, performing the functions you want it to, stop playing. The stable systems cyberface mentions - controlling nuclear reactors, etc. - won't have the operators installing the latest WMP fix or a new IE toolbar every five minutes. The entire environment is controlled and all new software is tested to ensure it doesn't affect anything else. A computer itself (ignoring the old Pentium floating-point bug) is deterministic. It will always give you the same answer to the same question. The problem is that as you use the computer - installing new software, software bugs leaving crap all over the machine, etc. - the question changes, and so does the answer.
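
A minimal sketch of that last point - the function below is perfectly deterministic, but the 'question' includes whatever state has accumulated on the machine (all the names here are made up for illustration):

installed_cruft = []  # stands in for registry entries, codec packs, toolbars...

def play_video(filename):
    # Same input, same code - but the answer depends on accumulated state.
    if "dodgy_codec_pack" in installed_cruft:
        return f"{filename}: sound drifts out of sync"
    return f"{filename}: plays fine"

print(play_video("film.avi"))   # plays fine
installed_cruft.append("dodgy_codec_pack")
print(play_video("film.avi"))   # sound drifts out of sync - 'temperamental'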

ThatPhilBrettGuy

11,809 posts

241 months

Friday 23rd May 2008
LexSport said:
Once you have a stable computer soon after setup and you have it performing the functions you want it to, stop playing.
Yup, that's the way. It's no surprise (to some of us) that even the OSes some regard as the most buggy can run flawlessly for years. I've had Windows boxes that got rebooted once a year, and that was due to UPS testing. Six years in and not a crash.

Macs do have the advantage of tightly controlled hardware. Open them up to all the $5 Ethernet and sound cards made in tin-roofed shacks and see what happens to their uptime....

JonRB

74,862 posts

273 months

Friday 23rd May 2008
The other problem is that at a chip level, computers do what you tell them to do. Exactly what you tell them to. Not what you thought you told them to but what you actually told them to.

This is where bugs (or defects as we're meant to call them) come from - the difference between 'thought' and 'actually'.
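
A tiny made-up Python example of that gap (not from any real product):

readings = [10, 20, 30, 40]

total = 0
for i in range(len(readings) - 1):  # you *thought* this summed every reading...
    total += readings[i]            # ...but it *actually* skips the last one
print(total / len(readings))        # 15.0, not the 25.0 you expected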

Also, I think it's a bit naive to expect them to "just work". Systems are complicated things - easily as complex as cars - and yet we still don't have a car that never breaks down and never goes wrong.

Cyberface makes some very valid points too. It's all about time to market, so code simply can't be tested as rigorously as it needs to be. It would take decades to exercise every code path in Vista, I reckon.

And, as has already been pointed out, the more software, drivers, widgets, thingummies and stuff you install on your machine, the less stable it is.
Some of the most stable systems around are servers with vanilla installs: only essential patches, applied manually rather than Windows Update hoovering up whatever crap Microsoft sends out, and no users installing extra crap on them all day long.

JonRB

74,862 posts

273 months

Friday 23rd May 2008
Animal said:
Yes, I always try and install the latest software updates when alerted but I don't know .tif from .pdf and I really don't care.
Well, there's your problem then. Why install the "latest and greatest" when you have no need to? Does it address a defect that is manifesting on your system? Does it provide extra functionality that you need? If not, then why install it?

Users, eh? (joke)

Edited by JonRB on Friday 23 May 11:16

big_treacle

1,727 posts

261 months

Friday 23rd May 2008
Whenever a mate turns up saying, 'can you look at this laptop man? It's running really badly.. might get a new one.. I'm sure it used to be fine...' I fix the thing up, and the problems are almost always down to a mixture of:

- Running Norton anti-everything software in some sort of check-everything-in-real-time mode.
- Having loads of applications & services in the background they don't need. Every time they install some software, it's set to run at startup, or to check for updates from time to time (i.e. something running in the background all the time). Every time they install something, they agree to install 'x' toolbar or some other extra piece of software (see the sketch after this list).
- Never doing any maintenance or organisation.
- Trying to update some software or a driver & making a mess of it.
- Having no understanding of PCs beyond how to switch on & off, how to start applications & how to download stuff, which compounds the other points.
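
On the startup point, a rough Python sketch (Windows only) will list what has wired itself into the current user's Run key - one of several places software registers itself to start with the machine:

import winreg

key = winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                     r"Software\Microsoft\Windows\CurrentVersion\Run")
try:
    i = 0
    while True:
        name, command, _ = winreg.EnumValue(key, i)  # raises OSError at the end
        print(f"{name}: {command}")
        i += 1
except OSError:
    pass  # no more startup entries
finally:
    winreg.CloseKey(key)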

Obviously you do also get software with bugs (it's impossible to test something 100% before release), or occasionally some sort of severe hardware/software incompatibility, which I think causes more issues on PCs due to their flexible hardware/software nature.

Jinx

11,407 posts

261 months

Friday 23rd May 2008
The hardware isn't always reliable...

http://en.wikipedia.org/wiki/Pentium_FDIV_bug
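
The famous failing case, if you want to try it - any sound FPU (or Python today) gives the correct value:

print(4195835 / 3145727)  # 1.3338204491362410...
# Affected Pentiums returned roughly 1.33373906 - wrong in the fifth
# significant digit, thanks to a handful of missing entries in the
# chip's division lookup table.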

hehe

jimmyjimjim

7,354 posts

239 months

Friday 23rd May 2008
cyberface said:
The main problem with Windows is that Microsoft don't have control over the hardware, so their software can be running on a combination of components that Microsoft simply can't be responsible for testing. A tenuous analogy would be like building a kit car, and then bunging in a Ford ECU, wiring it all up (let's assume the basics, it's a Ford 4 cylinder and the ECU is from a Ford 4 cylinder car, but not necessarily the same model or capacity) and expecting it to run without calibrating it.

I have no intention of this thread getting into yet another tedious platform argument so I'm not going to mention anything about Apple or Linux or Solaris or whatever.
I think that puts Microsoft's position very well. There was something the other day about the distribution of the causes of crashes on Vista; IIRC 80%+ were down to 3rd party drivers.

I'd love (no, not at all, really, because I think it would be bad) to see Apple run OSX (or 11, or 12, or whatever) on a similarly varied hardware base, just to see if they experienced a similar percentage. It would be interesting to see the stats and the number of issues, to see how they compare. It'll never happen of course, because Apple have enough sense not to; it would destroy the 'just works' image.

But it would also be interesting to see who got blamed; Apple, or the hardware people - perception is important.

As with cyberface, this isn't another platform argument; I'd be genuinely interested.

MrTom

868 posts

204 months

Friday 23rd May 2008
The daft thing is that whether your system is stable or not, the OS makers still recommend you update.
With regard to Microsoft's 80% reliable software, how come most Linux distros just work out of the box? Do they have better error handling?

dilbert

7,741 posts

232 months

Friday 23rd May 2008
Animal said:
I'm not a technophobe, but I switch my laptop on and expect it to work and that's about it. Yes, I always try and install the latest software updates when alerted but I don't know .tif from .pdf and I really don't care.

However, what I do care about is the fact that I can't watch movies with Media Player without the sound and vision being completely out of sync. I do care that Media Player is not responding. I do care that my computer ceases to do what it's told and throws a fit because it's trying to play the Windows logoff sound.

I'm lucky that I've got a good mate that loves computers and he suggests 'rebuilding' my computer every six months or so, but my question is why? Why is this poxy thing so fragile? How do I make it better without throwing across the garden and frightening the neighbours?
The hardware isn't unreliable. It's the software.

Perhaps you should think of software as introducing ideas to a totally perfect automaton, which (although there are sometimes faults) your computer actually is.

If you introduce conflicting ideas, and worse, try to realise those ideas at the same time, you will always have problems. If you don't understand the ideas, don't expect them to work with each other, unless you are capable of resolving the conflicts.

Mactac

645 posts

194 months

Friday 23rd May 2008
mcflurry said:
Throwing the cat in among the pigeons - that's the good thing about Macs - they "just work" smile

(Used) Mac mini for over a year now;
switch it on & off like a lightbulb, works every time.

For goodness' sake come and join us and stop whittling!


JonRB

74,862 posts

273 months

Friday 23rd May 2008
Mactac said:
(Used) Mac mini for over a year now;
switch it on & off like a lightbulb, works every time.

For goodness' sake come and join us and stop whittling!
Yeah, but it reminds me of the old joke:
Q. How many Macs does it take to change a light bulb?
A. You don't need to change the lightbulb, so we have not provided a way for you to do so.

Sure, you can make a system stable if you have it so locked down you have no choice over hardware or software. But where is the fun in that? It's the automotive equivalent of a Daewoo.

Edited by JonRB on Friday 23 May 22:21

Mactac

645 posts

194 months

Friday 23rd May 2008
No, I'll go with usability for now.

The fun of resets or conflicting component manufacture, coupled with settling down with your latest Grisham because the USB card is b*llocksed again - I reckon I can manage without.