THE bank rode the storm of chronic financial mismanagement by drawing on the taxpayers’ boundless generosity during the banking collapse of 2008. But what billions in bad loans couldn’t do, an apparently tiny computer glitch might yet achieve.
Over the next couple of weeks, the Royal Bank of Scotland’s senior management will be biting their nails as they wait to see how many of their account holders at Ulster Bank, NatWest and RBS consider pulling their custom. A simple flaw in a routine upgrade seems to have knocked the bank’s entire system off-kilter.
Already, there have been reports that doctors in Mexico threatened to turn off a dying girl’s life-support because NatWest did not transfer money owed to the hospital looking after her. At least one couple claimed to have seen a house purchase collapse because payment did not go through. Studying my own account, I notice that while money owed to me has not been paid in, cash has still been going out with ruthless efficiency – although I haven’t been able to establish whether it has reached its intended recipients.
For more than a decade, the banks have been encouraging us to carry out our business online because it saves them huge costs. If we do the work of administering our savings, they do not need so many branches, nor do they need to pay the tellers and bank managers who used to assist us in managing our money. But if that is the deal, then the banks have one unbreakable rule – they cannot allow their computer systems to fail. RBS just broke that rule, and may yet pay a heavy price.
It is, however, far from alone. In late 2011, and again in May this year, HSBC customers were unable to withdraw cash due to a computer malfunction. And the worrying truth is that almost all the basic infrastructure of our society is now controlled by computers. A similar glitch elsewhere could stop water flowing through our taps, food being delivered to supermarkets, electricity reaching our houses, or cash coming out of our ATMs.
The NatWest fiasco demonstrates just how fragile this level of dependency on the web and computer networks makes us. It also highlights our lack of what security analysts refer to as “resilience”: the ability to adapt to the consequences of a major systems breakdown. This is not just technological but psychological. Last year, I was visiting a major think tank in Washington DC on the same day that BlackBerry’s email server went down. I observed an outbreak of collective neurosis as the staff, most of whom had some form of graduate degree, suffered minor breakdowns as a consequence of not being able to check their email every five minutes. Similar behaviour was reported worldwide last week, when Twitter went down for a few hours.
Why are we so vulnerable to such disruption? The very genius of the internet is the fact that it connects everything. But this is also its Achilles’ heel. If a car breaks down, it will affect five or six people at most. Yet if the central computer controlling the traffic lights of London goes belly up, an entire city hits gridlock. And these networks are easier to break than you might think.
In 2008, a crackdown by the Pakistan Telecommunication Authority on YouTube, over anti-Islamic videos that were hosted there, resulted in much of the world losing access to the site. The censor had typed in the wrong instructions, and rather than blocking the site sent hundreds of millions of requests for it flooding to Pakistan Telecom’s network. Two years later, in 2010, Waddell & Reed Financial of Kansas attempted to execute an algorithmic sale of 75,000 futures contracts on American stock markets, valued at $4.5 billion. Its poorly written instructions triggered havoc, with automated trading systems misinterpreting the trade and wiping 1,000 points off the Dow Jones Industrial Average within minutes.
The real potential for disaster, however, can be seen in three separate events, all of which took place in 2007. In the first, Los Angeles Airport, one of the biggest in the world, seized up after cables supplying the internet to the US Department of Homeland Security burnt out. Twenty thousand passengers were penned into an area between tarmac and immigration for almost 24 hours before technicians were able to identify the cause of the problem.
The same year, the Metropolitan Police uncovered an al-Qaeda plot to blow up Telehouse in Docklands. This is the main internet hub for the United Kingdom: had the terrorists succeeded, our country would have suffered a technological heart attack. Indeed, in the third example, that is precisely what happened: hundreds of thousands of computers started “attacking” the network systems of Estonia. ATMs stopped working, along with the country’s main media outlets and most of its administration. Estonia was in dispute with the Russian government at the time: Russia denied responsibility, even though the hostile computers were traced back there. In the end, Estonia had to cut its entire internet off from the outside world to contain the problem.
Worryingly, rogue viruses and malicious software aimed at disrupting national infrastructures are set to become a standard tool in the military arsenal. Within the past month, US officials have admitted to having developed two major viruses, Stuxnet and Flame, in collaboration with Israel as part of a covert campaign to undermine Iran’s nuclear programme. The capabilities of criminal gangs and terrorists are less advanced – but having spent much of the past three years discussing computer security with online criminals for my latest book, I know that they can still circumvent the cyber-defences of banks and other businesses with ease.
So while I have some sympathy with RBS – because it is incredibly challenging to manage such a complicated system – the bank’s misfortune offers us all an important warning. One glaring issue is that the British Government, like all others around the world, has yet to introduce regulation requiring banks and large corporations to report serious failings in, or hacks of, their systems.
Although they may yield to public pressure, RBS’s executives are not obliged to reveal the cause of this system meltdown. Indeed, industry and banking are resisting such compulsory reporting precisely because an admission of failure leads to a massive dent in a company’s reputation.
One way around this is to insist on anonymous reporting of breaches, so that governments are able to form a much better picture of any problems afflicting our major computer systems in both the private and public sectors. But that isn’t all we need to be told about. Speaking as one of NatWest’s customers, I also want to know whether the bank’s IT system is serviced in-house or by an outside contractor. This is critically important, since the computer security industry now recognises that among the top cyber-threats to business is outside contractors maliciously or carelessly allowing viruses into networks, or even stealing data.
The UK Government is already discussing what might happen in the event of an “Advanced Persistent Threat” succeeding – a computer attack or failure whose consequences inflict huge and widespread damage to our economy or infrastructure. But at the moment, neither the state nor the public really knows how vulnerable we are to the attacks or malfunctions to which large computer systems are subject on a daily basis.
This matters, because our lives have become so utterly dependent on such systems. Without a proper debate, we will be left floundering when the next crisis takes place – and believe me, there will be another one before very long.