Natwest/RBS glitch
Discussion
Fantic SuperT said:
hornetrider said:
smack said:
hornetrider said:
Fantic SuperT said:
"A junior technician in India caused the RBS computer meltdown which froze millions of British bank accounts, it was claimed last night"
http://www.dailymail.co.uk/news/article-2165202/Di...
That's the sensationalist Wail version, reporting the Register article above.
(I'm coming up to renewal!)
rich1231 said:
Hmmm I have to balance this attack on off shore resources with some observations.
Yep, but you also have to balance that attitude with the actual reality of what is happening in RBS [according to people who work there] - there is a global reallocation of resources happening in other banks and across other industries, and it is not a bad thing per se. That's just the reality of global economics. I suppose it's not helpful if the Daily Mail and ilk latch onto this.
fido said:
Yep, but you also have to balance that attitude with the actual reality of what is happening in RBS [according to people who work there] - there is a global reallocation of resources happening in other banks and across other industries, and it is not a bad thing per se. That's just the reality of global economics. I suppose it's not helpful if the Daily Mail and ilk latch onto this.
Of course, but I don't believe that all offshoring is bad. The test team leads I worked with previously were very good, the onshore IT PMs were generally useless apart from two, and the 'blame the Indians for everything' culture was strong.
rich1231 said:
the 'blame the Indians for everything' culture was strong.
From my discussions with contractors who work there, it's not a blanket 'blame everything' culture. There is always background noise from disgruntled staff who have seen their pay cut. What they are unhappy with is:
a) moving out entire teams without any cost-benefit analysis
b) active promotion of offshoring through internal comms, but the opposite towards UK staff (the term 'offshoring' is actively discouraged)
c) the quality of the management and staff in offshore locations is very poor (productivity, absenteeism, reporting back to senior management)
honest_delboy said:
daily mail sez: "A source told the Mail the problems were exacerbated because the botched update was applied to both the banks’ back-up systems and the live computer"
ah, so bang goes the DR/failover scenario.
Again, not just operator failure - a failure of change management, and of management in general.
The person should not have had access to both systems, especially not during a change.
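The separation rule being argued for here - never apply one change to both the live system and its backup in the same window - is simple enough to enforce mechanically. A minimal sketch, with all names invented for illustration (this is not any bank's actual tooling):

```python
# Invented example: reject any change request that targets both halves
# of a live/DR pair, so one half always survives a botched update.

ENV_PAIRS = {("live", "backup")}  # environments that must never share a change

def validate_change(targets):
    """Raise if the target list covers both halves of a DR pair."""
    targets = set(targets)
    for a, b in ENV_PAIRS:
        if a in targets and b in targets:
            raise ValueError(
                f"change touches both '{a}' and '{b}': "
                "apply to one, verify, then schedule the other"
            )
    return True

validate_change(["live"])              # fine
# validate_change(["live", "backup"])  # raises ValueError
```

Apply to one half of the pair, verify the run is clean, and only then schedule the other - that way a bad update still leaves you a working system to fail over to.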
rich1231 said:
Again not just operator failure a failure of change management and management in general.
The person should not have had access to both systems, especially not during a change.
You've hit the nail squarely on the head there, Rich. It was the failure of change management that caused this.
GregE240 said:
rich1231 said:
Again not just operator failure a failure of change management and management in general.
The person should not have had access to both systems, especially not during a change.
You've hit the nail squarely on the head there, Rich. It was the failure of change management that caused this.
StuartGGray said:
In my experience, it's more change recording than change management...
Also I always like the...
Onshore: 'Where are your test scripts and back-out scripts to prove it works and that you can back it out?'
Offshore: 'Yes, we have done the necessary testing and have the scripts.'
Onshore: 'Can we get a copy so that change management/prod dev can review them?'
Everything goes quiet for a few days...
Onshore: 'This is not going to be released to any of the UAT testing environments until you prove it works - where are the logs?'
They release to the test environment anyway, as the release person is offshore too. Everything breaks in UAT, and blame emails get sent to the onshore boss about someone changing something and not telling them - therefore it's not their fault - and the person who worked on it is now on something else, so can't assist.
Onshore finally get a review of their scripts and find out they have ticked 'it works' for everything on the test. It does work - because it's hard-coded with data tailored for the test - but it will never work in the live system.
More blame emails to the onshore boss, blaming the onshore team for offshore development work. The boss finally snapped after this happening for 6 months and sacked half the team, after finding out they were not solely resourced to his system - they were moonlighting for other teams, as it meant they could charge the bank more money for 'doing' more work.
Don't get me started on hiring staff offshore, which onshore has no say in, then finding out the new person didn't know any of the technologies and that onshore should give them some slack as they were learning... this on a real-time system that is part of prime brokerage, bringing in more than $1 billion a year for the bank. So they think their bug-fix/enhancement developers should be allowed to learn on the job, instead of being trained beforehand by the offshore company that is charging the bank a lot of money for their services.
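The hard-coded-test failure mode described above is worth spelling out. A toy sketch - the account number and sort code are invented:

```python
# Toy illustration of a routine that "passes testing" only because the
# test data was tailored to it. All values are invented.

def lookup_sort_code(account: str) -> str:
    # Hard-coded for the single account used in the test run
    if account == "12345678":
        return "60-00-01"
    raise KeyError(account)

def test_lookup() -> None:
    # Green tick: the one input the code was written around
    assert lookup_sort_code("12345678") == "60-00-01"

test_lookup()  # passes - yet any other account number raises KeyError
```

The tick in the test report is genuine; the coverage it implies is not - which is exactly why the onshore side wanted to review the scripts rather than take the tick at face value.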
Funnily enough, I interviewed a bloke about 5 years ago who used to contract for RBS. He said their change control was so strict you actually had to write down the commands (including syntax) of what you were going to type, and gather evidence afterwards. If what you typed didn't match the script you would be in hot water.
I guess with less staff CC got shuffled to the bottom of priorities?
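That kind of control - commands written down in advance, evidence gathered, typed commands checked against the script - is straightforward to automate. A rough sketch with made-up commands (not RBS's actual process):

```python
# Sketch of a typed-vs-approved command audit. The runbook entries are
# invented examples, not real change commands.

APPROVED_RUNBOOK = [
    "cp /etc/app.conf /etc/app.conf.bak",
    "systemctl restart app",
]

def audit(executed: list[str]) -> list[str]:
    """Return every executed command that was not in the approved runbook."""
    return [cmd for cmd in executed if cmd not in APPROVED_RUNBOOK]

# What the operator actually typed (e.g. captured via script(1) or shell history)
typed = [
    "cp /etc/app.conf /etc/app.conf.bak",
    "rm -rf /var/spool/app",   # not in the runbook: hot water
]
print(audit(typed))  # ['rm -rf /var/spool/app']
```

In practice the capture side matters as much as the diff - a session recorder like script(1) gives you the evidence trail the interviewee described.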
Du1point8 said:
Also I always like the...
Onshore: 'Where are your test scripts and back-out scripts to prove it works and that you can back it out?'
Offshore: 'Yes, we have done the necessary testing and have the scripts.'
Onshore: 'Can we get a copy so that change management/prod dev can review them?'
Everything goes quiet for a few days...
Onshore: 'This is not going to be released to any of the UAT testing environments until you prove it works - where are the logs?'
They release to the test environment anyway, as the release person is offshore too. Everything breaks in UAT, and blame emails get sent to the onshore boss about someone changing something and not telling them - therefore it's not their fault - and the person who worked on it is now on something else, so can't assist.
Onshore finally get a review of their scripts and find out they have ticked 'it works' for everything on the test. It does work - because it's hard-coded with data tailored for the test - but it will never work in the live system.
More blame emails to the onshore boss, blaming the onshore team for offshore development work. The boss finally snapped after this happening for 6 months and sacked half the team, after finding out they were not solely resourced to his system - they were moonlighting for other teams, as it meant they could charge the bank more money for 'doing' more work.
Don't get me started on hiring staff offshore, which onshore has no say in, then finding out the new person didn't know any of the technologies and that onshore should give them some slack as they were learning... this on a real-time system that is part of prime brokerage, bringing in more than $1 billion a year for the bank. So they think their bug-fix/enhancement developers should be allowed to learn on the job, instead of being trained beforehand by the offshore company that is charging the bank a lot of money for their services.
nail.. head.
The best thing is they just say yes.. or go quiet and do/say nothing.
joe_90 said:
nail.. head.
The best thing is they just say yes.. or go quiet and do/say nothing.
Actually I'd go further and say that they say everything is OK, or go quiet, even when pressed repeatedly. When the situation is about to go into meltdown and they are threatened with their intransigence / failure to reply being escalated up the management tree, their reply pings back within seconds. This says to me that they did know the answer but were just refusing to give it.
Of course, if you do escalate it and their boss is of the same mindset...
honest_delboy said:
Funnily enough, I interviewed a bloke about 5 years ago who used to contract for RBS. He said their change control was so strict you actually had to write down the commands (including syntax) of what you were going to type, and gather evidence afterwards. If what you typed didn't match the script you would be in hot water.
I guess with less staff CC got shuffled to the bottom of priorities?
Just because you document the wrong commands doesn't make them any less wrong.
Change control is only as good as the people working under it want it to be, no matter what the actual process looks like.
Nothing to do with change control. It's pretty strict here.
The upgrade was successful, it just had performance issues. By Monday they were about a day behind. On Tuesday, when they backed it out, someone formatted the messaging queue. Not sure what they thought was going to happen, or whether it was done in error. Could have been as simple as the UNIX/Linux equivalent of sudo rm -rf * without checking where you were in the system.
The change was implemented by UK-based staff; the recovery, on the other hand, wasn't.
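The 'rm -rf in the wrong place' failure mode is the classic argument for wrapping destructive recovery steps in a path guard. A minimal sketch - the queue directory and function name are invented, and this is speculation about the incident, not a description of RBS's systems:

```python
# Invented example: refuse to delete anything outside the expected
# queue data root, so "wrong directory" mistakes fail loudly.
import os
import shutil

SAFE_ROOT = "/var/mq/queues"  # made-up path for the queue data

def purge_queue_dir(path: str) -> None:
    """Delete a queue directory, but refuse anything outside SAFE_ROOT."""
    real = os.path.realpath(path)  # resolve symlinks and '..'
    if not real.startswith(SAFE_ROOT + os.sep):
        raise RuntimeError(f"refusing to purge {real!r}: outside {SAFE_ROOT}")
    shutil.rmtree(real)

# purge_queue_dir("/var/mq/queues/q1")  # allowed (if it exists)
# purge_queue_dir("/")                  # refused with RuntimeError
```

Note the guard also refuses SAFE_ROOT itself, since it only accepts paths strictly below it - deliberately, as wiping the whole queue tree is exactly the scenario being described.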
Edited by Sexual Chocolate on Thursday 28th June 10:34
Gassing Station | News, Politics & Economics