Why are computers temperamental?
Discussion
I'm not a technophobe, but I switch my laptop on and expect it to work and that's about it. Yes, I always try and install the latest software updates when alerted but I don't know .tif from .pdf and I really don't care.
However, what I do care about is the fact that I can't watch movies with Media Player without the sound and vision being completely out of sync. I do care that Media Player is not responding. I do care that my computer ceases to do what it's told and throws a fit because it's trying to play the Windows logoff sound.
I'm lucky that I've got a good mate who loves computers and he suggests 'rebuilding' my computer every six months or so, but my question is why? Why is this poxy thing so fragile? How do I make it better without throwing it across the garden and frightening the neighbours?
Mine aren't. And I have a lot of them.
'Temperamental' is merely a euphemism for 'defective'. A bit like the classic car 'character' argument, a dash of anthropomorphism dresses things up, but the bottom line is that if a computer doesn't do what you instruct it to do, then either it's user error (i.e. you've given it faulty or contradictory instructions) or it's shoddy software (i.e. the people who wrote the instructions in your software, whether operating system or application, wrote bad code or didn't test thoroughly). Or a combination of the two.
Hardware isn't easy to change once it's built, so chip manufacturers tend to adopt a more 'engineering' approach to their products and, in general, computer hardware can be relied upon to perform according to specification. Software, however, can always be fixed with a patch... so in the rush to be first to market, software is generally of lower quality than hardware.
If you've ever dealt with programmers / developers you'll know why immediately. Yes, some are thorough engineers who deserve the title 'engineer'... but there are a LOT of cowboys out there who will sling out bug-ridden, inelegant, shoddy code. If not caught by diligent testers (something project sponsors tend to be reluctant to spend enough money or time on to ensure quality), this ends up in the end product.
And, it does have to be said, the 'big dog' of the software world, Microsoft, has historically had a reputation for releasing 80% products - in other words, only 80% finished. People using Microsoft systems have therefore become 'used to' unreliability and accept it as 'typical' of computer systems. To be fair, this sure as hell isn't confined to Microsoft - all software houses put out unfinished code to get ahead of the competition from time to time.
We wouldn't put up with it in a car though. The old internet joke that went round about the 'Microsoft Car' and its ten features (one being that the engine would stop for no apparent reason, but you could coast to the side of the road, restart it, and everything would be OK again...) is applicable to many others than Microsoft.
Computers *are* complicated devices, of course, which means there are many millions of dependencies and test cases, not all of which are feasible to test. But I'm not convinced there is any excuse for the generally poor quality of software other than sloppiness. A lot of programmers are simply not 'engineers' when they need to be. Software controlling nuclear reactors or aircraft, for example, has to be reliable and in general is. But when even space agencies lose multimillion pound spacecraft due to software failure (to be fair, the Mars Climate Orbiter's mix-up between metric and imperial units wasn't the programmers' fault - though they should have spotted it - but a fault in the requirements specifications, so down to the analysts), I can understand that people have the impression that computers are inherently 'temperamental' and sometimes go wrong for no reason whatsoever.
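To illustrate that unit mix-up with a made-up Python sketch (obviously not the actual spacecraft code - just the shape of the problem): one component produces a value in imperial units, another consumes it assuming SI, and a bare number carries no unit information to catch the mismatch.

```python
LBF_S_TO_N_S = 4.448222  # 1 pound-force second in newton-seconds

def ground_software_impulse():
    """Returns impulse in pound-force seconds (imperial) - per its own spec."""
    return 100.0

def navigation_consume(impulse_n_s):
    """Consumes the value assuming newton-seconds - per a different spec."""
    return impulse_n_s  # a bare float can't tell us it's in the wrong units

raw = ground_software_impulse()
wrong = navigation_consume(raw)                   # silently treated as 100 N*s
right = navigation_consume(raw * LBF_S_TO_N_S)    # ~444.8 N*s, what was meant

print(f"assumed: {wrong} N*s, actual: {right:.1f} N*s")
# Both calculations are 'correct' in isolation; the interface between the
# two specifications is where the defect lives.
```

Each side did exactly what its own specification said - which is why it's an analysts' failure rather than a coding bug.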
The reality is that there is *always* a reason, but MS have got enough people used to rebooting when things go wrong that it's accepted as part of the experience. It shouldn't be that way, but there you go. On the other side of the coin, if you can get your users to accept the 80% experience then you can progress faster, by moving on to new ideas and technology before you've perfected the old stuff. A lot of the high rate of progress can be attributed to this.
However I prefer my computers to 'just work' when I operate them according to the instructions, at the very least. Obviously you can break anything if you try, but standard cases should have been tested thoroughly. The main problem with Windows is that Microsoft don't control the hardware, so their software can be running on a combination of components that Microsoft simply can't be responsible for testing. A tenuous analogy would be building a kit car, bunging in a Ford ECU, wiring it all up (let's assume the basics: it's a Ford 4-cylinder and the ECU is from a Ford 4-cylinder car, but not necessarily the same model or capacity) and expecting it to run without calibrating it.
I have no intention of this thread getting into yet another tedious platform argument so I'm not going to mention anything about Apple or Linux or Solaris or whatever. But the general problem can be put down to a lot of programmers not being 'engineers' in the traditional sense when they *need* to be, the people writing the specifications or doing the testing being slack, and the sheer number of combinations of hardware making thorough testing impossible. And this applies across the spectrum of platforms. Compare general-purpose computers with single-purpose embedded systems... they tend to be designed and programmed by proper engineers and all cases can be tested (e.g. in a network switch or router), and as a result they tend to 'just work' without the random behaviour of some unfortunate desktop installations.
That said, there's really no excuse for laptops - the manufacturer should choose components that work with the operating system chosen, and then thoroughly test that combination of hardware with the operating system sold with the unit...
Errrr bit of a rant there. Apologies
mcflurry said:
Throwing the cat in among the pigeons - that's the good thing about macs - they "just work"
(I have a pc, linux machine and a mac)
Macs work because they are a closed box. As there is only a small number of component permutations, it is easier to get them talking to each other.
PCs are open, so there are many permutations of hardware and software combinations. Software drivers for the various components should get over the problem, but the sheer number of possible hardware combinations means that not all of them can be tested. Problems can only be solved when they can be identified.
Back to the original problem. The issue may be down to driver problems. The simple answer is to ensure you have the latest drivers, but getting and installing them is not made easy by manufacturers, who assume all PC owners are techies.
cyberface said:
If you've ever dealt with programmers / developers you'll know why immediately. Yes, some are thorough engineers who deserve the title 'engineer'... but there are a LOT of cowboys out there who will sling out bug-ridden, inelegant, ste code. This, if not caught by diligent testers (something project sponsors tend to be reluctant to spend enough money / time on to ensure quality) ends up in the end product.
I think that's perhaps a touch harsh. I know I release buggy code sometimes and I absolutely hate it. But when management put pressure on you to meet ridiculous timescales, it can be a way to buy yourself a little more time to fix the bugs in the QA phase. The problem being that by then the next bucket of work has landed on your head. Overall though, a very accurate description.
To the OP - once you have a stable computer soon after setup and it's performing the functions you want it to, stop playing. The stable systems cyberface mentions - controlling nuclear reactors, etc. - won't have the operators installing the latest WMP fix or a new IE toolbar every five minutes. The entire environment is controlled and all new software is tested to ensure it doesn't affect anything else. A computer itself (ignoring the old Pentium FDIV floating-point bug) is deterministic: it will always give you the same answer to the same question. The problem is that as you use the computer - installing new software, software bugs leaving crap all over the system, and so on - the question changes, and so does the answer.
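To illustrate with a made-up Python sketch: a deterministic function gives the same answer every time, and what *feels* like temperament is usually hidden state quietly changing the question (the names and thresholds below are invented for the example).

```python
# Deterministic: same inputs, same answer, every single time.
def add(a, b):
    return a + b

assert add(2, 2) == 4  # always

# What looks 'temperamental' is usually accumulated hidden state -
# analogous to leftover files, registry entries and background services.
environment = {"cruft": 0}

def launch_media_player(environment):
    environment["cruft"] += 1  # every install/crash leaves a little residue
    return "plays fine" if environment["cruft"] < 3 else "audio out of sync"

print(launch_media_player(environment))  # plays fine
print(launch_media_player(environment))  # plays fine
print(launch_media_player(environment))  # audio out of sync - 'temperamental'!
```

The third call gets a different answer not because the machine is moody, but because the 'same' call no longer has the same inputs - which is exactly why a rebuild (resetting the hidden state) fixes things.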
LexSport said:
Once you have a stable computer soon after setup and you have it performing the functions you want it to, stop playing.
Yup, that's the way. It's no surprise (to some of us) that even the most buggy OS in the eyes of some can run flawlessly for years. I've had Windows boxes that got rebooted once a year, and that was due to UPS testing. Six years in and not a crash. Macs do have the advantage of tightly controlled hardware. Open them up to all the $5 Ethernet and sound cards made in tin-roofed shacks and see what happens to their uptime...
The other problem is that at a chip level, computers do what you tell them to do. Exactly what you tell them to. Not what you thought you told them to, but what you actually told them to.
This is where bugs (or defects as we're meant to call them) come from - the difference between 'thought' and 'actually'.
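A made-up two-liner showing the gap between 'thought' and 'actually' - the classic wrong-operator case:

```python
def average_v1(a, b):
    return (a + b) // 2   # 'actually': floor division, because that's what we typed

def average_v2(a, b):
    return (a + b) / 2    # 'thought': true division, what we meant

print(average_v1(2, 3))   # 2   - exactly what we told it to do
print(average_v2(2, 3))   # 2.5 - what we thought we told it to do
```

The machine executed both versions perfectly; only one of them matched the programmer's intent.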
Also, I think it's a bit naive to expect them to "just work". Systems are complicated things - easily as complex as cars - and yet we still don't have a car that never breaks down and never goes wrong.
Cyberface makes some very valid points too. It's all about time to market, so code simply can't be tested as rigorously as it needs to be. It would take decades to exercise every code path in Vista, I reckon.
And, as has already been pointed out, the more software, drivers, widgets, thingumies and stuff you install on your machine, the less stable it is.
Some of the most stable systems around are servers with vanilla installs and with only essential patches manually installed on them, rather than Windows Update hoovering up whatever crap Microsoft sends out and without users installing extra crap on them all day long.
Animal said:
Yes, I always try and install the latest software updates when alerted but I don't know .tif from .pdf and I really don't care.
Well, there's your problem then. Why install the "latest and greatest" when you have no need? Does it address a defect that is manifesting on your system? Does it provide extra functionality that you need? If not, then why install it? Users, eh? (joke)
Edited by JonRB on Friday 23 May 11:16
Whenever I get a mate turning up saying, 'Can you look at this laptop, man? It's running really badly... might get a new one... I'm sure it used to be fine...', I fix the thing up and the problems are almost always down to a mixture of:
- Running Norton anti-everything software in some sort of check-everything-in-realtime mode.
- Having loads of applications and services running in the background that they don't need. Every time they install some software, it's set to run at startup, or to check for updates from time to time (i.e. to run something in the background all the time checking this). Every time they install something, they agree to install some toolbar or other extra piece of software.
- Never doing any maintenance or organisation.
- Trying to update some software or driver & making a mess of it.
- Having no understanding of PCs beyond how to switch on and off, how to start applications and how to download stuff, which compounds the other points.
Obviously you do also get software with genuine bugs (it's impossible to test something 100% before release) or occasionally some severe hardware/software incompatibility, which I think causes more issues on PCs due to their flexible hardware/software nature.
cyberface said:
The main problem with Windows is that Microsoft don't have control over the hardware, so their software can be running on a combination of components that Microsoft simply can't be responsible for testing. A tenuous analogy would be like building a kit car, and then bunging in a Ford ECU, wiring it all up (let's assume the basics, it's a Ford 4 cylinder and the ECU is from a Ford 4 cylinder car, but not necessarily the same model or capacity) and expecting it to run without calibrating it.
I have no intention of this thread getting into yet another tedious platform argument so I'm not going to mention anything about Apple or Linux or Solaris or whatever.
I think that's put Microsoft's position very well. There was something the other day about the distribution of the causes of crashes on Vista; IIRC 80%+ were down to 3rd-party drivers.
I'd love (well, not really, because I think it would be bad) to see Apple run OS X (or 11, or 12, or whatever) on a similarly enormous variety of hardware, just to see if they experienced a similar percentage. It would be interesting to see the stats, and the number of issues, just to see how they compare. It'll never happen of course, because Apple have enough sense not to; it would destroy the 'just works' image.
But it would also be interesting to see who got blamed; Apple, or the hardware people - perception is important.
As with cyberface, this isn't another platform argument, I'd be genuinely interested.
Animal said:
I'm not a technophobe, but I switch my laptop on and expect it to work and that's about it. Yes, I always try and install the latest software updates when alerted but I don't know .tif from .pdf and I really don't care.
However, what I do care about is the fact that I can't watch movies with Media Player without the sound and vision being completely out of sync. I do care that Media Player is not responding. I do care that my computer ceases to do what it's told and throws a fit because it's trying to play the Windows logoff sound.
I'm lucky that I've got a good mate who loves computers and he suggests 'rebuilding' my computer every six months or so, but my question is why? Why is this poxy thing so fragile? How do I make it better without throwing it across the garden and frightening the neighbours?
The hardware isn't unreliable. It's the software.
Perhaps you should think of software as introducing ideas to a totally perfect automaton, which (allowing for the occasional hardware fault) your computer actually is.
If you introduce conflicting ideas, and worse, try to realise those ideas at the same time, you will always have problems. If you don't understand the ideas, don't expect them to work with each other unless you are capable of resolving the conflicts.
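A made-up Python sketch of two conflicting 'ideas' realised at the same time: run them one after the other and all is well; interleave them and the result is wrong - deterministically wrong given this interleaving, not 'temperamental'.

```python
# Sequential: deposit 50, then withdraw 30 - the two ideas don't conflict.
balance = 100
balance = balance + 50
balance = balance - 30
assert balance == 120   # as expected

# Interleaved: both ideas read the old value before either writes back.
balance = 100
read_a = balance        # the deposit reads 100
read_b = balance        # the withdrawal also reads 100
balance = read_a + 50   # the deposit writes 150
balance = read_b - 30   # the withdrawal writes 70 - the deposit is lost

print(balance)  # 70, not 120
```

Nothing random happened: the machine carried out both ideas exactly as instructed, and the conflict between them produced the wrong answer.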
Mactac said:
(used) Mac mini for over a year now,
switch it on & off like a lightbulb works every time.
for goodness sake come and join us and stop wittering!
Yeah, but it reminds me of the old joke:
Q. How many Macs does it take to change a light bulb?
A. You don't need to change the lightbulb, so we have not provided a way for you to do so.
Sure, you can make a system stable if you have it so locked down you have no choice over hardware or software. But where is the fun in that? It's the automotive equivalent of a Daewoo.
Edited by JonRB on Friday 23 May 22:21