A glimpse into cyber-security’s AI-driven future
A hacking conference reveals how machines will defend us
IT TAKES ONLY a brief chat with the organisers of Black Hat Asia to realise this is no ordinary conference. Whereas most professional get-togethers invite their guests to piggyback on the hotel Wi-Fi, Black Hat builds the network for its annual conferences in Las Vegas, London and Singapore from scratch, installing switches, access points, firewalls and monitoring sensors before the conference opens. The Network Operations Centre (NOC) must then defend it in real time from thousands of the world’s best hackers—not just the conference’s adversaries, but also those attending, who are explicitly tasked with attacking its infrastructure.
This year’s Singapore edition, held from April 21st to 24th, took place in the shadow of announcements from large tech companies that artificial-intelligence models could now outperform all but the best hackers. Anthropic’s Mythos, for example, the most prominent such model, is already said to have identified severe vulnerabilities in “every major operating system and web browser”. For most tech users, this feels like a watershed moment. For those at Black Hat, however, it is confirmation of what they have long seen coming.
Defending Black Hat is “orders of magnitude” harder than ordinary corporate cyber-security, says Neil “Grifter” Wyler, who has run the NOC for 24 years, all but six of which have been alongside his colleague Bart Stump. Indeed, when the head of cyber-security for the Paris Olympics needed a model for his own security-operations centre, he spent a week with the NOC at Black Hat London. Part of the challenge is scale: a typical firm faces one or two attackers at a time, whereas Black Hat must deal with thousands, many testing exploits freshly taught by world-class instructors. The other challenge is filtering: the NOC team must allow such coursework to happen while distinguishing it from real attacks.
What they see ranges from the trivial to the unsettling. One attendee used a weather app that leaked their GPS co-ordinates. Another was feeding their cat remotely through an app that others could have hijacked. Visits were logged to 81 unique adult-website domains.
But the same tools that spot compromised pet feeders also catch more nefarious activity. A few years ago a participant used the conference network to hack a water-treatment facility in America (Messrs Wyler and Stump are cagey about the details). Another hid behind the din of legitimate hacker traffic to attack government websites and payment systems. The NOC team traced him, sent him a message reminding him that doing illegal things from Black Hat was still illegal, then watched him close his laptop and walk away. Hackers on the other side of the world try their luck too. When the registration server was switched on, attacks began at once, including traffic that appeared to originate in Romania. “It would be a feather in their cap to take down Black Hat,” says Mr Wyler.
The team has used AI to defend the network for years, says Mr Wyler, against bots as well as humans. But the bots are becoming noticeably more skilled. “The problem is that the attacks have gone from taking a week to a day to hours or minutes.” The NOC team has, therefore, built a stack of AI tools to fight fire with fire.
Trevor, for example, an AI chatbot, can turn questions written in plain English into code that can navigate the NOC’s complex database. This helps get members of the team, many of whom are freelancers, up to speed more quickly. Another tool monitors the patterns of encrypted beacons—the small, regular check-ins that compromised devices send back to attackers’ servers—and uses machine learning to distinguish them from the millions of legitimate connections the devices make each day.
It was with the help of this tool that the computer of a Taiwanese journalist attending Black Hat was found to have been infected with malware: among the noise of normal traffic, it was making connections to an unfamiliar server at a metronomic cadence that no legitimate app would produce.
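The NOC’s actual tooling is proprietary, but the underlying idea—spotting a metronomic cadence—can be sketched simply: for each device-to-server connection, measure how regular the gaps between check-ins are. Beacons tick like a clock; human-driven traffic is bursty. A minimal illustration (all names and thresholds are hypothetical, not the NOC’s own):

```python
from statistics import mean, stdev

def beacon_score(timestamps, min_events=6):
    """Coefficient of variation of the gaps between connections.

    Values near zero indicate metronomic, beacon-like traffic;
    bursty human activity scores far higher. Returns None when
    there are too few events to judge.
    """
    if len(timestamps) < min_events:
        return None
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    mu = mean(gaps)
    if mu == 0:
        return None
    return stdev(gaps) / mu

# A compromised host checking in roughly every 60 seconds, with slight jitter...
beacon = [i * 60.0 + j for i, j in
          enumerate([0, 0.4, -0.2, 0.1, 0.3, -0.1, 0.2, 0.0])]
# ...versus bursty, human-driven browsing of the same server.
browsing = [0, 2, 3, 40, 41, 200, 620, 625]

assert beacon_score(beacon) < 0.05    # metronomic: flag for review
assert beacon_score(browsing) > 0.5   # bursty: almost certainly benign
```

Real systems add many more features—packet sizes, time-of-day patterns, destination reputation—but regularity of cadence is the signal that gave the journalist’s laptop away.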
A third tool makes use of an AI agent to profile every device on the network, flagging unusual behaviour. Once the NOC saw suspicious traffic on the journalist’s laptop, the agent checked clues obtained from the network against information available on the internet to quickly identify the owner. The team used the conference’s registration database to confirm the match before compiling a report and informing both the journalist and his organisation.
Mr Stump says the NOC has seen a pattern across multiple Black Hat conferences in which Taiwanese participants show up with hacked devices. “Most of [the traffic] goes back to China,” he says. AI-powered attacks by nation-states or cybercriminals are likely to intensify.
The team thinks the AI race is only beginning. For Mr Wyler, the vulnerabilities discovered by Mythos, including some that have gone undetected for decades, are to be welcomed rather than feared. “We now know they’re there.”
All the same, cautions Mr Stump, the next two years will be turbulent: more flaws will be uncovered; more breaches will occur as firms feed sensitive data into AI systems; and more insecure code will be written. If that transitional period can be handled responsibly, a new equilibrium may be reached that resembles the one now being left behind. One thing, says Mr Stump, is certain. “In a year there will be a new AI model that makes Mythos look like a toddler with a keyboard.” ■