Ciaran Martin argues that cyberspace is finally, if unevenly, getting safer
IN 2016 a group led by a 20-year-old student in New Jersey hacked into hundreds of thousands of internet-of-things (IoT) devices. It was not hard work: the devices, mostly CCTV cameras, had default passwords like "password" or "12345". What's more, even if a diligent operator of those devices had noticed this, the way the machines were configured meant that the default password couldn't be changed. The hackers then diverted the flow of information from the devices to a company called Dyn, which played a key role in moving data around the internet. This denial-of-service attack clogged Dyn up and, for a day, users across America and Europe struggled to connect to the likes of Amazon, Twitter and Reddit. It was one of the biggest disruptions in internet history.

Targeting countless poorly defended IoT devices simultaneously had a simple genius about it. Indeed, the Mirai botnet attack, as it was dubbed, may in time come to be seen as a watershed moment in cyber-security.

The early years of the communications revolution saw grim warnings from Western top brass about "cyber Pearl Harbours", and Hollywood dictators pushing big red buttons to switch off everything from hospital ventilators to air-traffic-control systems. The Economist itself ran a cover in 2010 showing a giant explosion in a cityscape under the headline "Cyberwar: the threat from the internet". The more mundane reality, reflected in the Dyn incident, was basic hardware and software flaws causing extensive economic and social disruption.

Following Dyn, governments across the world started to apply themselves to the dull but necessary task of regulating some of the technological age's most threatening security flaws. After several years of painstaking policy development, selling IoT hardware with such basic but dangerous weaknesses is now, or soon will be, illegal in the European Union and Britain, and effectively banned in Singapore via a voluntary-standards scheme. The Biden administration is planning something similar, should Congress allow.

This pragmatic, problem-specific approach is increasingly reflected elsewhere in Western cyber-security. Following a series of highly disruptive ransomware attacks in 2021, including one in which a private company was forced to shut down a major oil pipeline originating in Houston, Texas, the Americans fixed a glaring defect in their system: until last year, companies providing critically important services had no obligation even to tell the federal government that they had been attacked.

Soon after that incident, a separate ransomware attack on the Irish government's Health Service Executive (HSE) exposed some remarkable imbalances in European cyber regulation. A hack by Russian criminals crippled access to the HSE's network; for several days the Irish state struggled to provide cancer, stroke and diagnostic services. But it was only when it became clear that the hackers had stolen personal health-care data, in addition to paralysing the whole system, that the obligation to notify regulators kicked in. Irish regulation in effect incentivised health-care workers to prioritise keeping patients' emails confidential over being able to treat them. Now, EU law places more of a burden on providers of essential services to keep them going after a hack.

In Britain, political debate on digital regulation focuses on the government's attempts to police content via its unwieldy Online Safety Bill.
But from a national-security perspective a largely unnoticed strengthening of the security rules for telecommunications infrastructure, passed last year, is probably more important. That legislation was drafted in consultation with the telecoms industry and largely enjoys its support, part of a global trend of closer co-operation between government and business in cyber-security. Heightened fears of cyber-attacks following Russia's invasion of Ukraine have also accelerated public-private collaboration, even if the war-related cyber threat to the West itself has not materialised to the extent predicted.

Perhaps the most profound shift of direction is the Biden administration's new cyber-security strategy, published in March. It marks a decisive break with the passive strategies of earlier presidencies, which relied on exhorting companies to share information and work voluntarily with America's government. The most important change is the pledge to shift liability for insecure products and services as far as possible towards the provider and away from the user. If astutely implemented, this new strategy could rectify the most fundamental flaw of modern tech: that its foundations were built without security in mind.

All this matters for three reasons. First, the world now has a chance to start properly cleaning up the digital environment. It has been too easy to deploy poorly built hardware and software on which millions then become quickly dependent. This is beginning to change. The key test will come as innovations, most importantly quantum computing, take hold. New technology needs to be secure by design from the outset, and governments now get this.

Second, legal clarity helps businesses, which bear the bulk of cyber-security risk, to manage it better. Clearer rules and sensible allocations of liability, based on likely risks rather than Hollywood hype, will help organisations put cyber-security at the core of leadership strategy.

Finally, there are lessons for the latest panic over artificial intelligence (AI). The current apocalyptic warnings of the wholesale destruction of jobs, truth and even human life itself are eerily reminiscent of the peak "cybergeddon" period in the early part of this century. Such hyperbole caught the attention of policymakers and business leaders, but did little to improve security. It was only when societies realised that they were not passive bystanders facing an insurmountable threat, and that they could instead break down a seemingly endless set of risks into discrete chunks, that improvements started to take hold.

Governments should apply this same problem-specific approach to AI. The measures required to promote trust in legitimate information will be different from those needed to guard against bias in algorithms used in public services, or those governing AI in a military context. Crucially, as with cyber-security, we should proceed cautiously where there is a risk of physical harm, such as with driverless cars. And we should invest heavily in understanding and controlling the biggest long-term threat from AI, that of fully autonomous decision-making. Here again there are lessons from cyber-security, where billions of dollars and much expertise are being thrown at making sure quantum computing can be implemented safely when it finally arrives.

The warnings of cyber-devastation of a decade or so ago have not come to pass. Slowly, if unevenly, cyberspace is getting safer.
How this turning of the tide came about holds valuable lessons for how we adapt to the relentless pace of technological change in the future.

Ciaran Martin was the first head of Britain's National Cyber Security Centre. He is a professor of practice in the management of public organisations at the Blavatnik School of Government.