Thursday, August 5, 2010

News Update: How DBS debacle unfolded

The systems-related failure on July 5 was the bank's largest ever, crippling over 1,000 DBS and POSB automated teller machines (ATMs) as well as its Internet and mobile banking services for at least seven hours. -- PHOTO: TNP


TWO bungling IBM engineers and a faulty cable were all it took to cause the biggest bank network crash here in recent years.
They ignored the correct steps to change the cable, as prompted by DBS Bank's mainframe computer, and used a wrong procedure not once but four times at the bank's data centre last month.
On the fifth attempt, the data storage system, linked to the mainframe computer by the troublesome cable, shut itself down, taking the entire network with it.
Until the meltdown, the storage system had hummed away as before, said DBS and IBM in a joint statement released on Wednesday.
There was no hint of disaster: all systems were fully functional at the first alert, and the problem was classified as "low severity".
It looked like just a routine service call, and DBS also had several layers of back-up should anything fail. But the engineers' chain of errors over 30 hours changed all that.