BA systems down globally
Discussion
djc206 said:
Wayne E Edge said:
Hope it isn't going to affect my flight to Paris on Monday. BA app is down.
Without wanting to be a doom monger, there is significant weather forecast for tomorrow afternoon and possibly Monday, which may add to the misery. You could just play it safe and book a flight with another airline. It's entirely plausible that BA will offer passengers booked over the next few days a full refund in order to free up some seats for those disrupted; I believe there's precedent here.
Cold said:
BA are citing a "power supply issue" as the reason behind the IT systems failure.
If that's the case then some senior mgmt definitely need to be booted out. Incompetence extraordinaire.
Vaud said:
Outsourcing in itself is not bad.
Outsourcing and not using some savings to invest in systems resilience and modernisation is bad.
I don't see companies the size of BA being able to make material and genuine savings by outsourcing. They are of such a scale that economies of scale for the vendor cease to be applicable.
Any savings are generally either imaginary, accounting trickery, or a reduction in service (whether deliberate, or because those doing the outsourcing had no clue what to ask for nor any ability in negotiating). Or, more likely still, a combination of all of the above.
The hokey cokey of IT service provision is as cyclical as it is because people realise this eventually. Thankfully, however, it will never stop.
justinio said:
They claimed it was £(can't remember how many) millions by outsourcing IT to TCS.
It hasn't gone down too well and morale is (as expected) low.
I'm sure they claimed plenty. And the people signing the deal will have moved on to some other top job on the back of said claim before any actual saving has had any chance of materialising and/or the downsides hit home.
Murph7355 said:
Cold said:
BA are citing a "power supply issue" as the reason behind the IT systems failure.
If that's the case then some senior mgmt definitely need to be booted out. Incompetence extraordinaire.
GT03ROB said:
Quite. Take out a single data centre & the whole company goes down? Simply unbelievable for a company the size of BA that is reliant on its systems.
No date on it, but apparently they have 6 DCs: https://www.ait-pg.co.uk/our-work/british-airways/
Vaud said:
GT03ROB said:
Quite. Take out a single data centre & the whole company goes down? Simply unbelievable for a company the size of BA that is reliant on its systems.
No date on it, but apparently they have 6 DCs: https://www.ait-pg.co.uk/our-work/british-airways/
Power outages in data centres...
So much can go wrong. Systems may be designed to deal with an outage, with uninterruptible power supplies, diesels, etc. to give resilience; it all comes down to how the data centre is designed. Some skimp and only put the IT equipment on UPS, depending on the diesels kicking in to supply all the non-critical equipment: things like air handling units, chillers and chilled water circuits, which don't need to be supported for the first 10 minutes or so. It cuts the cost of the UPS system, as you only need to support, say, 2 MW of IT infrastructure out of a total site load of, say, 4 MW. The trouble is, if your generators fail you then have no cooling, and that impacts the running of the IT side of things.
The resilience of the system could have a weak point, and it's just unfortunate that it's been uncovered today. It's a Saturday as well, so maybe a few maintenance activities were being carried out. I can remember having a data centre on raw mains while they upgraded a UPS. Yes, it's a risk, but one that was calculated; like all the best calculations, though, sometimes there is a wrong shout.
It'll be interesting to find out what has happened here.
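To put rough numbers on that UPS trade-off, here's a back-of-envelope sketch in Python. It only uses the illustrative figures from the post above (2 MW of IT load out of a 4 MW site); the cost and ride-through numbers are assumptions for the sake of the sketch, not BA's actual design.

```python
# Back-of-envelope UPS sizing, using the illustrative figures from the
# post above. All numbers are assumptions, not BA's actual design.

SITE_LOAD_MW = 4.0        # total site electrical load (assumed)
IT_LOAD_MW = 2.0          # critical IT load carried on UPS (assumed)
UPS_COST_PER_MW = 1.5e6   # illustrative UPS cost per MW (assumed)

# Saving from putting only the IT load on UPS rather than the whole site:
ups_it_only = IT_LOAD_MW * UPS_COST_PER_MW
ups_whole_site = SITE_LOAD_MW * UPS_COST_PER_MW
print(f"UPS for IT load only: £{ups_it_only:,.0f}")
print(f"UPS for whole site:   £{ups_whole_site:,.0f}")
print(f"Saving:               £{ups_whole_site - ups_it_only:,.0f}")

# The catch the post describes: if the generators fail to start, the UPS
# keeps the servers powered but the cooling is dead, so the room heats up
# and the IT kit has to shut down anyway -- often well inside the battery
# run time.
BATTERY_MINUTES = 10.0           # short-autonomy battery (assumed)
THERMAL_RIDE_THROUGH_MIN = 5.0   # minutes before the room overheats (assumed)
effective = min(BATTERY_MINUTES, THERMAL_RIDE_THROUGH_MIN)
print(f"Effective ride-through with failed generators: {effective:.0f} min")
```

The point the arithmetic makes is that halving the UPS capacity saves real money up front, but the failure mode it buys is a hard one: batteries without cooling are worth only as long as the thermal ride-through, not the battery run time.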
Vaud said:
GT03ROB said:
It matters little whether they have 6 or 60... one's gone down & the operation is down. This is not rocket science or a complex IT problem; it smacks of total incompetence.
Oh, I agree...
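For what it's worth, the sort of multi-site failover being taken for granted here isn't exotic. A minimal sketch in Python of a client falling back across data centres; the endpoint URLs are hypothetical placeholders, and this is obviously a toy, not BA's actual architecture.

```python
# Minimal sketch of multi-site failover: try each data centre's health
# endpoint in turn instead of depending on a single site. The endpoint
# URLs are hypothetical placeholders.
import urllib.request

ENDPOINTS = [
    "https://dc1.example.invalid/health",  # hypothetical primary site
    "https://dc2.example.invalid/health",  # hypothetical secondary site
    "https://dc3.example.invalid/health",  # hypothetical tertiary site
]

def first_healthy(endpoints, timeout=2.0):
    """Return the first endpoint that answers with HTTP 200, else None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # this site is unreachable or timed out; try the next
    return None

if __name__ == "__main__":
    target = first_healthy(ENDPOINTS)
    print(target or "all sites unreachable")
```

The hard part in practice isn't the retry loop, of course; it's keeping the booking and operations data consistent across sites so there is actually something healthy to fail over to.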