Bank's IT debacle is useful reminder

The NatWest/RBS banking debacle that has unfolded over the past week is being pinned on a single [very expensive] mistake by a relatively inexperienced software technician.

Possibly the work of an offshored IT bod, or possibly a local blunder, depending upon which reports you believe.

Whatever the cause, it's a cock-up of monumental proportions that could end up costing the bank as much as £100m in compensation, never mind the reputational damage and potential exodus of customers. It's a useful reminder to dust off those disaster recovery and business continuity plans and check whether they're up to date and suitably robust.

People tend to think about horrible happenings such as fire or flood when it comes to disasters.

But given the reliance this industry has on software – and increasingly on cloud computing – what happens if the software packs up or the cloud bursts? I'm sure the providers of such services have reassured customers with all the usual blah-blah about built-in redundancy, automatic failovers and so on.

But as the other recent high-profile mega-fail at BlackBerry maker RIM showed, what happens in actuality? If the backup promises don't go quite according to plan, what would you do then?

Better to have a plan than not have a clue.