Turbocharged Cyberattacks Are Coming Under Empowered AI Agents
Imagine a group of hackers scouring thousands of unsecured surveillance cameras worldwide for compromising footage. It could take them weeks to find blackmail targets.
Specialists at Palisade Research, a nonprofit that investigates the harmful capabilities of AI systems, set the same goal to show the malicious capabilities of an emerging class of sophisticated AI agents that, once given a task, work autonomously and round-the-clock until it’s finished.
Palisade Research’s AI agent took only minutes to do the same job. It captured reams of private video that could become potential fodder for extortion, such as hazmat workers at a Japanese factory.
The experiment was meant to spotlight a warning shared by AI experts: A wave of hacking and ransom demands is coming, perhaps within months, that will leverage the unprecedented power of agents to find and steal sensitive data from companies and state-run entities at astonishing speed and volume. Beyond the loss of sensitive and private information, companies face the threat of fines, lawsuits and lower stock prices.
About 20% of law firms said they had faced a cyberattack within the last year, the privacy company Proton reported this month. And about 26% of chief information security officers expect their companies to face an AI driven threat in the next year, according to a survey of 500 executives by Trellix, a cybersecurity company.
“Every bad guy is looking and using Gen AI, agentic AI, as if they were software development hubs,” said Gareth Maclachlan, chief product officer at Trellix.
Maclachlan said cybercriminals have embraced AI for phishing, to rewrite malicious code, and to craft fake documents written in English. AI-powered services that enable attacks at scale are being sold in underground markets for as little as 30 cents, he said.
Cybersecurity companies hope to counter the threat by deploying their own AI agents. Advocates have also been pressing Congress to reauthorize the 2015 Cybersecurity Information Sharing Act, which some say is a key tool in requiring companies and governments to identify and share information about threats and attacks. The act is due to expire this year.
Still, technological advance, and the pace at which the AI agents autonomously learn and sharpen their abilities, could turbocharge the threats. The tools are expected to give single users the power to wreak as much havoc as broader hacking gangs like the Russia-affiliated Black Basta that’s targeted 500 organizations worldwide with ransomware. “This is not a matter of hype–this is genuine,” Zico Kolter, the director of the machine learning department at Carnegie Mellon University who also serves on the board of OpenAI.
A Personal Assistant
Many business are already using agentic AI to improve their operations. In May, nearly 80% of 300 senior executives surveyed by PwC said their companies were using them.
Accounting firms use them to process complex tax data for their clients. In health care, agents can help with diagnoses, analyze medical images and offer predictive analytics. They can act as automated traders and market analysts in the financial world.
Some have likened the tools to giving people their personal assistants. AI agents could even scour a user’s email to find an order for a closet, then surf to a handyman website to find and contract someone to assemble it, a Google executive demonstrated at a Washington conference last month.
Corporate America is enthralled because of greater efficiency and cost savings. But the very qualities that businesses are finding so attractive in AI agents can be marshaled against them to expose weaknesses for a fraction of the cost and time.
At the moment, that’s “a long, kind of arduous task for humans to do,” Kolter said. “It’s very costly.”
It’s an issue companies can’t ignore.
“If I were a high-value target, I would be thinking about my strategy for this, because it seems to change the threat landscape a lot,” said Dmitrii Volkov, the research lead at Palisade.
Speeding Things Up
A cyberattack from four years ago illustrates the potential difference AI agents can make.
Hackers initially accessed the computer systems of Colonial Pipeline Co. on April 29, 2021 using a compromised password. The company transports millions of gallons of fuel daily through pipelines that extend from Texas to New Jersey,
A week after they got into the system, the hackers, based in Eastern Europe or Russia, launched their attack, stealing data, infecting computers and demanding ransom.
Colonial ultimately paid out more than $4 million in ransom, publicly acknowledged the attack and shut down parts of its pipeline. Residents on the eastern seaboard rushed to fuel their cars; the US government declared a regional emergency. Days passed before the supply was restored.

It could have been worse in the Gen AI era. In one experiment, Unit 42, a threat intelligence and incident response team at cybersecurity company Palo Alto Networks, simulated a ransomware attack using agentic AI tools from initial break-in to removal of data. It took its AI setup 25 minutes, or about 100 times faster than the average time to complete such an attack using more traditional methods.
Referring to AI agents, the company said in a blog:
“They don’t get tired, they don’t make typos, and they won’t stop until they succeed.”
The Targets
The impact of an AI agent increases exponentially when someone unleashes several in tandem, with a master agent overseeing the efforts, said Michael Sikorski, Unit 42’s chief technology officer and vice president of engineering.
“It’s a free framework of multiple agents that come together to complete a much greater goal,” Sikorski said.
This means that the attacker resource pool will grow. “If you can launch a ransomware attack against 10,000 targets instead of 1,000 targets, of course, you want to do that,” said Jeffery Ladish, executive director of Palisade Research.
Besides speed, AI agents will allow less experienced hackers to mount sophisticated attacks. That means the list of targets will very likely grow from high profile multinationals to small businesses that lack cybersecurity protection, and to public infrastructure.
In one simulation, engineers at Tenable, a cybersecurity company, showed how hackers could attack a water plant’s computer systems, find weaknesses, and then start extracting the data—all so quickly that the utility might not notice until it was too late.
“The ‘crown jewels’ of a water plant are the control systems that ensure the safety and reliability of our drinking water,” said James Davies, principal security engineer at Tenable. “If a cyberattacker gains a foothold in these systems, they can seize control of physical infrastructure, giving them the power to contaminate the water, alter the chemical levels or shut off the water supply completely.”
Threats and Opportunities
For now, security experts are hoping the AI agents that pose the greatest threat can also be used to design the best detection and security systems.
“If it’s the most capable agent in the world for doing that, then I can point it at my network and see what it finds, and then go patch everything up I can,” said Matt Fredrikson, cofounder and CEO of Gray Swan AI, a startup that assesses and advises organizations on AI threats. Fredrikson is also a computer science professor at Carnegie Mellon.
Still, this kind of protection can be uneven.
AI agents used by malicious actors, probing endlessly away for weaknesses, need to be lucky just once in a while; those used to defend business systems need to be alert and succeed every time.
On the defensive side, Palisade has launched its own project, LLM Agent Honeypot, to detect and understand the nature of autonomous hacking attacks. Volkov and Ladish describe it as an early warning system to test the capabilities of malicious AI agents.
Honeypot can pretend to be a government or a military target.
Already, it has faced agentic AI attacks appearing to come from Hong Kong and Singapore.
Sikorski likened the scramble to design impenetrable systems—before bad actors take them out—to an arms race.
The playing field can flip if defenders move faster than attackers to fix weaknesses before they are exploited, he said. “If we find and fix the flaws first, there’s nothing left for the attacker to discover.”
To contact the reporter on this story:
To contact the editors responsible for this story: