The future of cybersecurity is essentially a digital version of "Rock 'Em Sock 'Em Robots" that pits offensive- and defensive-minded AI against each other—or at least that's the impression given by an NBC report on how the industry views AI.
"In recent months, hackers of seemingly every stripe — cybercriminals, spies, researchers and corporate defenders alike — have started including AI tools into their work," the report said. "LLMs, like ChatGPT, are still error-prone. But they have become remarkably adept at processing language instructions and at translating plain language into computer code, or identifying and summarizing documents."
NBC goes on to recount how Google has been discovering vulnerabilities with AI, how CrowdStrike is "using AI to help people who think they've been hacked," and how a startup called Xbow developed an AI that managed to "climb to the top of the HackerOne U.S. leaderboard" in June. (HackerOne has since divided its leaderboards into separate trackers for individual researchers and "collectives" like Xbow.)
Earlier this month I reported on CrowdStrike's warning that North Korean operatives are using generative AI to create resumes, social media accounts, and other materials designed to trick Western tech companies into hiring them. Once hired, they shift to using the same AI tooling to communicate with co-workers, write code, and otherwise maintain a facade of normalcy while they collect their paychecks.
AI's utility for that purpose—as well as for summarizing documents—has been well-established. (Or at least better established than its ability to do sophisticated cybersecurity research on its own; it's still not particularly good at distilling facts.) But it seems a bit early to declare the arrival of the era of AI hacking, especially since it's often being used as a force multiplier rather than a fully automated solution.
Google vice president of security engineering Heather Adkins told NBC that she hasn't "seen anybody find something novel" with AI, and that it's "just kind of doing what we already know how to do." She also said "that will advance," but researchers, companies, and independent organizations alike have been assuring us that AI will "advance" far beyond its current limits since the 1960s, so we're operating on a long timeline here.
Xbow's rise to the top of the HackerOne leaderboard is also interesting, but celebrating it overlooks the sheer amount of "slop" produced by similar AI tools that promise to help security researchers find vulnerabilities. Daniel Stenberg, lead developer of the open source curl project on which practically every internet-connected device relies, has repeatedly bemoaned the amount of time he's wasted on "vulnerabilities" found by AI.
"The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions) as we have averaged in about two security report submissions per week," Stenberg said in a recent blog post on this problem. (Emphasis his.) "In early July, about 5% of the submissions in 2025 had turned out to be genuine vulnerabilities. The valid-rate has decreased significantly compared to previous years."
Stenberg recently gave a talk on this problem, too, and it's worth reading his 2024 blog post on the phenomenon as well. The lead developer of a project used in more than 20 billion devices is spending this much time calling out—to say nothing of actually dealing with—this issue. How many other maintainers of open source projects are being overwhelmed by similar problems, but without the same degree of visibility?
So AI has proven useful in social engineering attacks like the North Korean tech worker scheme, sped up the rate at which Google's researchers can discover vulnerabilities, and found ways to game the HackerOne leaderboard. NBC also reported that Russian hackers have started embedding AI in malware used against Ukraine to "automatically search the victims’ computers for sensitive files to send back to Moscow."
But it hasn't found many interesting vulnerabilities on its own, it's bombarding open source projects with irrelevant reports, and we have no idea if the AI used to find sensitive files on Russia's behalf produced worthwhile intelligence. (Especially if an AI was then asked to summarize the compromised documents and hallucinated some juicy intel because it didn't want its virtual family to be sent to the gulag.)
Is that enough to declare the era of AI hacking, or is it just another side effect of the broader interest in AI being fueled by exorbitant spending by the world's largest tech companies, investment trends from the venture capital class, and geopolitical conflicts over what the two preceding groups have declared the industry of the future?