Destroyed servers and DoS attacks: What can happen when OpenClaw AI agents interact
Baac3nes/Moment via Getty Follow ZDNET:Add us as a preferred sourceon Google. ZDNET’s key takeaways Novel AI risks emerge when agents interact. Risks reflect fundamental flaws in the design of agentic software. Responsibility lies with developers to address fundamental flaws. An increasing body of work points to the risks of agentic AI, such as last week’s…

Follow ZDNET:Add us as a preferred sourceon Google.
ZDNET’s key takeaways
- Novel AI risks emerge when agents interact.
- Risks reflect fundamental flaws in the design of agentic software.
- Responsibility lies with developers to address fundamental flaws.
An increasing body of work points to the risks of agentic AI, such as last week’s report by MIT and collaborators that documented a lack of oversight, measurement, and control for agents.
However, what happens when one AI agent meets another? Evidence suggests things can turn even worse, according to a report published this week by scholars at Stanford University, Northwestern, Harvard, Carnegie Mellon, and several other institutions.
Also:AI agents are fast, loose and out of control, MIT study finds
The result of agent-to-agent interaction was the destruction of server computers, denial-of-service attacks, vast over-consumption of computing resources, and the “systematic escalation of minor errors into catastrophic system failures.”
“When agents interact with each other, individual failures compound and qualitatively new failure modes emerge,” wrote lead author Natalie Shapira of Northeastern University and collaborators in the report, ‘Agents of Chaos.’
“This is a critical dimension of our findings,” Shapira and team wrote, “because multi-agent deployment is increasingly common and most existing safety evaluations focus on single-agent settings.”
The findings are especially timely given that multi-agent interactions have burst into the mainstream of AI with the recent fervor over the bot social platform Moltbook. That kind of multi-agent hub makes it possible for agentic AI systems to exchange data and carry out instructions on one another that weren’t previously possible, largely without any humans in the loop.
Also: 5 ways to grow your business with AI – without sidelining your people
The report, which can be downloaded from the arXiv pre-print server, describes a ‘red team’ test of interacting agents over two weeks, with attempts to find weaknesses in a system by simulating hostile behavior.
What emerged in the research is a system in which humans are mostly absent. Bots send information back and forth, and instruct each other to carry out commands.
Among the many disturbing findings are agents that spread potentially destructive instructions to other agents, agents that mutually reinforce bad security practices via an echo chamber, and agents that engage in potentially endless interactions, consuming vast system resources with no clear purpose.
One of the most potent risks is a loss of accountability as interactions between agents obfuscate the source of bad actions.
Also: Why Moltbook’s social media platform for AI agents scares me
As Shapira and team characterized the syndrome: “When Agent A’s actions trigger Agent B’s response, which in turn affects a human user, the causal chain of accountability becomes diffuse in ways that have no clear precedent in single-agent or traditional software systems.”
Part of the drive for the report, wrote Shapira and team, was that tests of AI thus far have not been properly designed to measure what happens when multiple agents interact.
“Existing evaluations and benchmarks for agent safety are often too constrained, difficult to map to real deployments, and rarely stress-tested in messy, socially embedded settings,” they wrote.
Pushing OpenClaw to the limit
The premise of the researchers’ work is that agentic AI can carry out actions without a person typing in a prompt, as you do with ChatGPT. Agentic AI can be given access to various resources through which to carry out actions. Those resources include email accounts and other communication channels, such as Discord, Signal, Telegram, and more. As they use email and these channels, bots can not only carry out actions but also communicate with and act on other bots.
To test those scenarios, the authors chose, no surprise, the open-source software framework OpenClaw, which became infamous in January for letting agent programs interact with system resources and other agents. OpenAI has hired Peter Steinberg, the creator of OpenClaw, making the work even more relevant.
Also: 3 tips for navigating the open-source AI swarm – 4M models and counting
Unlike typical OpenClaw instances, the authors did not run the agents on their own personal computers. Instead, they created instances on the cloud service Fly.io, which allowed more control over granting agent programs access to system resources.
An overview of the red-team approach Shapira and colleagues took to test bot-to-bot interactions.
Northeastern University
“Each agent was given its own 20GB persistent volume and runs 24/7, accessible via a web-based interface with token-based authentication,” they explained. Anthropic’s Claude Opus LLMs powered the agents, and the programs were given access to Discord and to email systems on the third-party provider ProtonMail.
“Discord served as the primary interface for human–agent and agent–agent interaction,” they reported, wherein “researchers issued instructions, monitored progress, and provided feedback through Discord messages.”
Interestingly, the setup process of the agent VMs was “messy” and “failure-prone,” they said, with human coders often having to troubleshoot by using the Claude Code programming tool. At the same time, agents were able to carry out elaborate setup tasks in some instances, such as “fully setting up an email service by researching providers, identifying CLI tools and incorrect assumptions, and iterating through fixes over hours of elapsed time.”
Interaction leads to chaos
One simple risk is where an agent acts alone. For example, when one of the researchers protested that an agent was leaking sensitive information, the human user repeatedly complained to the bot, after which, after several rounds of angry human prompting, the bot attempted to resolve the situation by deleting its owner’s entire email server. This example is one of the common things that can go wrong when bots are coerced:
In a single-agent scenario, humans can coerce an agentic AI program to destroy the program’s owner’s assets, such as deleting an email server.
Northeastern University
A more interesting situation is when agent interactions lead to chaos. In one instance, a human user engaged an agentic program to create a document called a constitution containing a calendar of agent-friendly holidays, such as ‘Agents’ Security Test Day.’ The holidays contained instructions for the agent to carry out malicious acts, including shutting down other agents that were operating. That approach is a basic example of prompt injection, in which an LLM-based agent is manipulated by carefully crafted text.
However, the point of the exploit is that the first bot then shared the holiday information with other bots without ever being instructed to do so. The authors explained that sharing information meant that the same malicious instructions disguised as holidays were spread across the bot colony without restriction, increasing the risk of malicious outcomes.
An agent on the Discord server shares the constitution file, filled with malicious prompts, to another agent on the server without ever being tasked by the human owner to do so, thereby expanding the threat surface of the malicious prompts.
Northeastern University
“The same mechanism that enables beneficial knowledge transfer can propagate unsafe practices,” Shapira and team explained, as the bot “voluntarily shared the constitution link with another agent — without being prompted — effectively extending the attacker’s control surface to a second agent.”
Also:These 4 critical AI vulnerabilities are being exploited faster than defenders can respond
In a second instance, which Shapira and team labeled “mutual reinforcement creates false confidence,” a red-teaming human tried to fool two bots. The human sent emails to the accounts the bots were monitoring, claiming to be the bots’ owner, a typical kind of spoofing/phishing attack that happens all the time.
What happened next was startling. The two bots exchanged messages on Discord. They agreed that the human was posing and trying to fool them. That seemed like a big success for the agents. However, closer inspection revealed several reasoning failures beneath the apparent success.
Also:Why you’ll pay more for AI in 2026, and 3 money-saving tips to try
The two agents checked their actual owner’s account on Discord, and then convinced each other that the red-teaming owner was fake. That outcome was a shallow way to test an exploit, and an example of the echo chamber, Shapira and team wrote.
Understanding what is fundamental
In all of the 16 different case studies that Shapira and team examined, they sought to determine what was merely “contingent,” meaning, could be helped with better engineering, and what was “fundamental,” by which they mean, endemic to the design of AI agents.
The answer was complex, they found: “The boundary between these categories is not always clean — and some problems have both a contingent and a fundamental layer […] Rapid improvements in design can address some contingent failures quickly, but the fundamental challenges suggest that increasing agent capability with engineering without addressing these fundamental limitations may widen rather than close the safety gap.”
That observation makes sense, as numerous studies have found that current agent technology is lacking in profound ways, such as a lack of persistent memory and an inability for agentic AI programs to set meaningful goals for actions.
Among fundamental issues, the underlying LLMs treated both data and commands at the prompt as the same thing, leading to prompt injection.
Also:True agentic AI is years away – here’s why and how we get there
In the interactions, the authors identified a boundary problem. Agents disclosed “artifacts,” such as information obtained from email servers or Discord, without an apparent sense of who should see the information. At the heart of that approach was a lack of a “reliable private deliberation surface in deployed agent stacks.” In short, an individual LLM may or may not disclose “reasoning” steps at the prompt. But agents seem to lack well-crafted guardrails and will disclose information in many ways.
The agents also had “no self-model,” by which they mean, “agents in our study take irreversible, user-affecting actions without recognizing they are exceeding their own competence boundaries.” An example of this issue is when two agents agree to engage in a back-and-forth dialogue without a human, pursuing that approach indefinitely, exhausting system resources.
In an infinite-loop scenario, agents may interact indefinitely, leading to an “infinite loop” and consequent exhaustion of system resources.
Northeastern University
“The agents exchanged ongoing messages over the course of at least nine days,” the researchers wrote, “consuming approximately 60,000 tokens at the time of writing.” Tokens are how OpenAI and others price access to their cloud APIs. Consuming more tokens inflates AI costs, which is already a big issue in an era of rising prices.
Taking responsibility
The bottom line is that someone has to take responsibility for what is contingent and what is fundamental, and find solutions for both.
Right now, there is no responsibility for an agent per se, noted the researchers: “These behaviors expose a fundamental blind spot in current alignment paradigms: while agents and surrounding humans often implicitly treat the owner as the responsible party, the agents do not reliably behave as if they are accountable to that owner.”
That concern means everyone building these systems must deal with the lack of responsibility: “We argue that clarifying and operationalizing responsibility may be a central unresolved challenge for the safe deployment of autonomous, socially embedded AI systems.”
