NatWest/RBS glitch

Author
Discussion

onyx39

11,133 posts

151 months

Wednesday 27th June 2012
Fantic SuperT said:
hornetrider said:
smack said:
hornetrider said:
Fantic SuperT said:
"A junior technician in India caused the RBS computer meltdown which froze millions of British bank accounts, it was claimed last night"
http://www.dailymail.co.uk/news/article-2165202/Di...
That's the sensationalist Wail version, which is reporting the Register article above.
Well, if this is the kick needed to bring these kinds of jobs back to the UK, then let the Wail sensationalize it all they want, I say.
For what it's worth, I thoroughly agree!

(I'm coming up to renewal!)
So do I, especially since I spoke to the DM journos last night and gave them the story.
smile

hornetrider

Original Poster:

63,161 posts

206 months

Wednesday 27th June 2012
Fantic SuperT said:
So do I, especially since I spoke to the DM journos last night and gave them the story.
rofl

How much do they pay for that?!

fido

16,849 posts

256 months

Wednesday 27th June 2012
rich1231 said:
Hmmm I have to balance this attack on off shore resources with some observations.
Yep, but you also have to balance that attitude with the actual reality of what is happening in RBS [according to people who work there] - there is a global reallocation of resources happening in other banks and across other industries, and it is not a bad thing per se. That's just the reality of global economics. I suppose it's not helpful if the Daily Mail and its ilk latch onto this.

rich1231

17,331 posts

261 months

Wednesday 27th June 2012
fido said:
Yep, but you also have to balance that attitude with the actual reality of what is happening in RBS [according to people who work there] - there is a global reallocation of resources happening in other banks and across other industries, and it is not a bad thing per se. That's just the reality of global economics. I suppose it's not helpful if the Daily Mail and its ilk latch onto this.
Of course, but I don't believe that all offshoring is bad. The test team leads I worked with previously were very good, the onshore IT PMs were generally useless apart from two, and the 'blame the Indians for everything' culture was strong.

fido

16,849 posts

256 months

Wednesday 27th June 2012
rich1231 said:
the 'blame the Indians for everything' culture was strong.
From my discussions with contractors who work there, it's not a case of blaming everything on them. There is always background noise from disgruntled staff who have seen their pay cut. What they are unhappy with is:
a) moving out entire teams without any cost-benefit analysis
b) active promotion of offshoring through internal comms, but the opposite towards UK staff (the term 'offshoring' is actively discouraged)
c) the quality of the management and staff in offshore locations is very poor (productivity, absenteeism, reporting back to senior management)

McHaggis

50,760 posts

156 months

Wednesday 27th June 2012
Oakey said:
There is a cyber war going on...

*adjusts tinfoil hat*
There is always a cyber war going on. Just doesn't all get reported.

This however looks like incompetence.

honest_delboy

1,519 posts

201 months

Wednesday 27th June 2012
daily mail sez: "A source told the Mail the problems were exacerbated because the botched update was applied to both the banks’ back-up systems and the live computer"

ah, so bang goes the DR/failover scenario.
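The textbook way round that is to stage it: update the live side first, prove it's healthy, and only then let the same change anywhere near the standby, so there is always a known-good release to fail over to. A rough sketch of the idea, with made-up function names (nothing to do with RBS's actual tooling):

import time

def apply_update(target: str) -> None:
    # Placeholder: push the new release to the named environment.
    print(f"applying update to {target}")

def health_check(target: str) -> bool:
    # Placeholder: overnight batch completed, queues draining, balances reconciling...
    print(f"checking {target}")
    return True

def rollback(target: str) -> None:
    print(f"rolling back {target}")

def staged_rollout(soak_seconds: int = 24 * 3600) -> None:
    """Update the live system first; the DR copy stays on the old,
    known-good release until the change has proven itself."""
    apply_update("live")
    if not health_check("live"):
        rollback("live")         # DR untouched, so failover is still an option
        return
    time.sleep(soak_seconds)     # soak period: let a full batch cycle run cleanly
    apply_update("dr")           # only now does the standby get the new release

if __name__ == "__main__":
    staged_rollout(soak_seconds=0)   # zero soak just for the demo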

rich1231

17,331 posts

261 months

Wednesday 27th June 2012
honest_delboy said:
daily mail sez: "A source told the Mail the problems were exacerbated because the botched update was applied to both the banks’ back-up systems and the live computer"

ah, so bang goes the DR/failover scenario.
Again, not just operator failure but a failure of change management, and of management in general.

The person should not have had access to both systems, especially not during a change.
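A minimal sketch of that sort of separation, where no single set of credentials can reach both the live system and the DR copy (hypothetical names, not RBS's actual access model):

# Which environment each operator's credentials can reach; during a change
# window, nobody should hold credentials for both the live system and DR.
ACCESS = {
    "alice": {"live"},   # applies the change
    "bob": {"dr"},       # looks after the standby
}

def authorise(operator: str, environment: str) -> None:
    """Refuse any action outside the operator's assigned environment."""
    if environment not in ACCESS.get(operator, set()):
        raise PermissionError(f"{operator} is not authorised for {environment}")

authorise("alice", "live")    # fine
# authorise("alice", "dr")    # raises: one pair of hands can't touch both systems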

GregE240

10,857 posts

268 months

Wednesday 27th June 2012
rich1231 said:
Again, not just operator failure but a failure of change management, and of management in general.

The person should not have had access to both systems, especially not during a change.
You've hit the nail squarely on the head there, Rich. It was the failure of change management that caused this.

Boozy

2,349 posts

220 months

Wednesday 27th June 2012
GregE240 said:
You've hit the nail squarely on the head there, Rich. It was the failure of change management that caused this.
But change doesn't cause incidents! Oh wait...

Apache

39,731 posts

285 months

Wednesday 27th June 2012
GregE240 said:
rich1231 said:
Again, not just operator failure but a failure of change management, and of management in general.

The person should not have had access to both systems, especially not during a change.
You've hit the nail squarely on the head there, Rich. It was the failure of change management that caused this.
It didn't fail so much as it didn't exist.

Murph7355

37,818 posts

257 months

Thursday 28th June 2012
Apache said:
it didn't fail so much as didn't exist
It rarely does in any meaningful sense. Lip service is paid to it in all large organisations.

As someone else mentioned, things like ITIL and PRINCE2 have much to answer for.

StuartGGray

7,703 posts

229 months

Thursday 28th June 2012
In my experience, it's more change recording than change management...

Du1point8

21,613 posts

193 months

Thursday 28th June 2012
StuartGGray said:
In my experience, it's more change recording than change management...
Also, I always like the...

Onshore: 'Where are your test scripts and back-out scripts to prove it works and that you can back it out?'

Offshore: 'Yes, we have done the necessary testing and have the scripts.'

Onshore: 'Can we get a copy so that change management/prod dev can review them?'

Everything goes quiet for a few days...

Onshore: 'This is not going to be released to any of the UAT testing environments until you prove it works. Where are the logs?'

They release to the test environment anyway, as the release person is offshore too. Everything breaks in UAT, blame emails get sent to the onshore boss about someone changing something and not telling them, therefore it's not their fault, and the person who worked on it is now on something else so can't assist.

Onshore finally gets a review of their scripts and finds out they have ticked 'it works' for everything on the test. It does work, because it's hard-coded with data tailored for the test, but it will never work in the live system.
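A minimal sketch of what 'hard-coded with data tailored for the test' tends to mean in practice (made-up names, obviously not the actual system):

def calculate_fee(trade_value: float) -> float:
    # Looks like a general calculation, but it has been fitted to the one
    # input the test script ever sends.
    if trade_value == 10_000.00:
        return 25.00
    return 0.0          # everything the live system sends falls through to here

def test_calculate_fee():
    # Ticked as 'passed' in the UAT evidence...
    assert calculate_fee(10_000.00) == 25.00

test_calculate_fee()    # green in the evidence pack, meaningless against live data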

More blame emails go to the onshore boss, blaming the onshore team for offshore development work. The boss finally snapped after six months of this and sacked half the team, once he found out they were not solely resourced for his system; they were moonlighting for other teams because it meant they could charge the bank more money for 'doing' more work.

Don't get me started on hiring staff offshore, which onshore has no say in, then finding out the new person didn't know any of the technologies and that onshore should cut them some slack because they were learning... this on a real-time system that is part of prime brokerage, bringing in more than $1 billion a year for the bank. So they think their bug-fix/enhancement developers should be allowed to learn on the job instead of being trained beforehand by the offshore company charging the bank a lot of money for its services.

davepoth

29,395 posts

200 months

Thursday 28th June 2012
I wonder with things like this if the CEO ever thought "it'll save us £5m a year in staffing costs if it all goes according to plan - but what happens if it goes wrong?"

honest_delboy

1,519 posts

201 months

Thursday 28th June 2012
Funnily enough, I interviewed a bloke about five years ago who used to contract for RBS. He said their change control was so strict that you actually had to write down the commands (including syntax) you were going to type, and gather evidence afterwards. If what you typed didn't match the script, you would be in hot water.

I guess with fewer staff, change control got shuffled to the bottom of the priorities?
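Either way, that sort of regime is easy enough to sketch in a few lines (hypothetical Python, nothing to do with RBS's real tooling):

import datetime
import subprocess

# The change record: the exact commands approved in advance, in order.
APPROVED_COMMANDS = [
    ["df", "-h"],
    ["uptime"],
]

def run_change(planned, evidence_file="change_evidence.log"):
    """Refuse anything that deviates from the change record, and capture
    the output of every command as after-the-fact evidence."""
    with open(evidence_file, "a") as log:
        for typed, approved in zip(planned, APPROVED_COMMANDS):
            if typed != approved:
                log.write(f"{datetime.datetime.now()} DEVIATION: {typed!r} != {approved!r}\n")
                raise RuntimeError("typed command does not match the change record")
            result = subprocess.run(typed, capture_output=True, text=True)
            log.write(f"{datetime.datetime.now()} ran {' '.join(typed)}:\n{result.stdout}\n")

if __name__ == "__main__":
    run_change([["df", "-h"], ["uptime"]])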

joe_90

4,206 posts

232 months

Thursday 28th June 2012
Du1point8 said:
Also, I always like the...

Onshore: 'Where are your test scripts and back-out scripts to prove it works and that you can back it out?'

Offshore: 'Yes, we have done the necessary testing and have the scripts.'

Onshore: 'Can we get a copy so that change management/prod dev can review them?'

Everything goes quiet for a few days...

Onshore: 'This is not going to be released to any of the UAT testing environments until you prove it works. Where are the logs?'

They release to the test environment anyway, as the release person is offshore too. Everything breaks in UAT, blame emails get sent to the onshore boss about someone changing something and not telling them, therefore it's not their fault, and the person who worked on it is now on something else so can't assist.

Onshore finally gets a review of their scripts and finds out they have ticked 'it works' for everything on the test. It does work, because it's hard-coded with data tailored for the test, but it will never work in the live system.

More blame emails go to the onshore boss, blaming the onshore team for offshore development work. The boss finally snapped after six months of this and sacked half the team, once he found out they were not solely resourced for his system; they were moonlighting for other teams because it meant they could charge the bank more money for 'doing' more work.

Don't get me started on hiring staff offshore, which onshore has no say in, then finding out the new person didn't know any of the technologies and that onshore should cut them some slack because they were learning... this on a real-time system that is part of prime brokerage, bringing in more than $1 billion a year for the bank. So they think their bug-fix/enhancement developers should be allowed to learn on the job instead of being trained beforehand by the offshore company charging the bank a lot of money for its services.
nail.. head.
The best thing is they just say yes.. or go quiet and do/say nothing.

ExFiF

44,250 posts

252 months

Thursday 28th June 2012
joe_90 said:
nail.. head.
The best thing is they just say yes.. or go quiet and do/say nothing.
Actually, I'd go further and say that they insist everything is OK, or go quiet even when pressed repeatedly. When the situation is about to go into meltdown and they are threatened with their intransigence / failure to reply being escalated up the management tree, their reply pings back within seconds. That says to me they knew the answer all along but were just refusing to give it.

Of course, if you do escalate it and their boss is of the same mindset...

BMWBen

4,899 posts

202 months

Thursday 28th June 2012
honest_delboy said:
Funnily enough, I interviewed a bloke about five years ago who used to contract for RBS. He said their change control was so strict that you actually had to write down the commands (including syntax) you were going to type, and gather evidence afterwards. If what you typed didn't match the script, you would be in hot water.

I guess with fewer staff, change control got shuffled to the bottom of the priorities?
Just because you document the wrong commands doesn't make them any less wrong wink

Change control is only as good as the people working under it want it to be, no matter what the actual process looks like.

Sexual Chocolate

1,583 posts

145 months

Thursday 28th June 2012
Nothing to do with change control. It's pretty strict here.

The upgrade was successful; it just had performance issues. By Monday they were about a day behind. On Tuesday, when they backed it out, someone formatted the messaging queue. Not sure what they thought was going to happen, or whether it was done in error. It could have been as simple as the Unix/Linux equivalent of sudo rm -rf * without checking where you were in the system.
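Which is exactly the class of mistake a bit of guard-railing on destructive commands is meant to catch; a rough sketch of the idea (made-up names, nothing to do with whatever queueing product RBS actually runs):

import os
import shutil

# Queues that a recovery procedure is ever allowed to wipe.
PURGEABLE_QUEUES = {"/var/queues/uat_inbound"}

def purge_queue(queue_dir: str, confirm_env: str) -> None:
    """Destructively empty a message-queue directory, but only if the target
    is on the allow-list and the operator has named a non-production
    environment, so a slip of the working directory can't wipe live."""
    queue_dir = os.path.abspath(queue_dir)
    if queue_dir not in PURGEABLE_QUEUES:
        raise PermissionError(f"{queue_dir} is not on the purgeable list")
    if confirm_env != "uat":
        raise PermissionError("operator did not confirm a non-production environment")
    shutil.rmtree(queue_dir)
    os.makedirs(queue_dir)

# purge_queue(os.getcwd(), "prod") refuses, rather than behaving like an
# unchecked 'rm -rf *' in whatever directory you happen to be sat in.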

The change was implemented by UK-based staff; the recovery, on the other hand, wasn't.


Edited by Sexual Chocolate on Thursday 28th June 10:34