The biggest AI threats come from within – 12 ways to defend your organization

ZDNET’s key takeaways
- AI is empowering both cybersecurity teams and cybercriminals.
- Consultancy EY urges CISOs to be proactive to minimize risk.
- The company shares 12 safety tips in a new report.
It’s become a bit of a cliché to describe AI as a double-edged sword, but that doesn’t make the phrase untrue.
Cybersecurity experts have been particularly vocal on this point. “AI amplifies defense through faster detection and response but simultaneously lowers the cost and complexity of attacks,” consulting firm EY wrote in a report published earlier this month called “AI and cybersecurity: The new frontier of business resilience.”
“While defenders use AI to identify threats, adversaries leverage the same technologies for deception,” the report said.
Also: Rolling out AI? 5 security tactics your business can’t get wrong – and why
The technology that’s making cybersecurity defenses more robust, in other words, is also empowering the cybercriminals who are trying to break through those protections. Like Thor and Loki, or Batman and the Joker, the two foes constantly have to outpace and outmaneuver one another in what’s shaping up to be a long, possibly never-ending arms race. (On a related note, AI developers like OpenAI have their own security arms race to contend with: the better their models can protect against prompt injection attacks, the more cunning those attacks become.)
Counterintuitively, however, some experts say the gravest AI-powered threat to cybersecurity systems isn’t from external hackers. Instead, the biggest threat comes from within organizations themselves, when employees use the technology without adequate internal guardrails.
Following a watershed MIT study last year, which found that over nine in 10 businesses’ AI initiatives failed to produce meaningful results, there’s been a lot of debate about the relative merits of a top-down approach to the technology (in which organizational leaders control how their employees use it) versus a bottom-up approach (where employees are given more freedom to experiment with different tools). And according to Dan Mellen, EY’s global cyber chief technology officer, taking a bottom-up approach to cybersecurity in the age of AI is asking for trouble.
Also: Will AI make cybersecurity obsolete, or is Silicon Valley confabulating again?
“Organizations should absolutely take a top-down approach to implementing security guardrails around employees’ use of AI,” Mellen told ZDNET. Compared with external threats, such as prompt-injection attacks, said Mellen, “the use of ungoverned intelligent tools by insiders … presents a significantly greater risk to the enterprise.”
EY’s new report arrives at a time when AI agents are being peddled to businesses as productivity boosters for employees. But while these systems’ capacity to build apps and handle a range of other complex tasks continues to grow, they still come with unresolved security concerns. The most notable is that agents’ greater autonomy brings a greater potential for unexpected behavior; evidence suggests agents can act unpredictably, sometimes with disastrous consequences.
Also: Why enterprise AI agents could become the ultimate insider threat
Mellen is, therefore, just one voice among a growing chorus of cybersecurity experts who have been raising alarms that the deployment of agents within businesses is outpacing the implementation of effective guardrails.
12 tips for CISOs
This risk-from-within paradigm is precisely what EY wanted to address in its new AI and cybersecurity report.
Also: AI threats will get worse: 6 ways to match the tenacity of your digital adversaries
Broadly speaking, the report urges CISOs to approach cybersecurity with as much top-down visibility as possible: clearly mapping out how, where, and why AI is being used internally, and formulating action plans for when those systems behave unexpectedly.
Here are the company’s 12 strategic recommendations at a glance:
- First and foremost, develop internal AI governance policies. These policies should cover key considerations, such as how, where, when, and why the technology can be used, and which data the models can access. (A minimal policy sketch appears after this list.)
- Expand your horizon of possibilities. According to EY, cybersecurity professionals have historically focused their use of AI mainly on defending against attacks. Moving forward, they should embrace a more offensive mindset, using AI “to identify and neutralize threats before they can impact systems,” the company wrote in its report, through exercises like red-teaming.
- Build a framework to measure the ROI of internal AI use that accounts for quantitative gains (such as time and money saved) and qualitative gains (such as enhanced security).
- Have a system in place to continually monitor your internal AI systems’ performance and their compliance with the ever-changing regulatory landscape.
- Going back to governance, make sure employees understand which uses of AI are acceptable and which aren’t, and how to respond when models start to act in unexpected ways.
- Be able to visualize your organization’s internal use of AI. Build a dashboard that employees can access to gain a quick and clear overview of which models are in play, the datasets they’re using, training requirements, and so on.
- Expand your AI platform portfolio. Start adopting AI-powered tools designed for specific cybersecurity functions, including automated response tools like SmiForce and security information and event management (SIEM) tools like SentinelOne.
- Carefully map the data sources used by your internal AI systems and where they’re traveling to, especially if you’re handling data across multiple jurisdictions with differing AI and privacy laws (for example, between the US and the EU). For an extra layer of security, consider implementing zero-trust architectures that treat any person or network attempting to access an internal database as a potential attacker that requires authentication. (See the zero-trust sketch after this list.)
- Train your employees to detect AI-generated scams, such as deepfakes and phishing attacks.
- Poke and prod your internal AI systems to try to detect and shore up vulnerabilities. Use red-teaming exercises to simulate prompt injection attacks and other scenarios. Implement multifactor authentication measures for agents undertaking sensitive tasks (ideally, one of those factors would be a human-in-the-loop to authenticate the agent; see the human-in-the-loop sketch after this list).
- Join the broader conversation. Attend conferences hosted by organizations like the National Institute of Standards and Technology and the Open Worldwide Application Security Project to keep up with breaking developments in the ever-evolving field of AI-powered cybersecurity. Strike up conversations with other industry experts about emerging threats and the tactics that are being deployed to protect against them.
- Pay attention to the geopolitical chessboard. The limited supply of GPUs has become a major point of concern in the race between the US and China to build their respective AI industries. Keep an eye on shifting export controls and other factors that could limit your future chip supply, and plan accordingly.
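
To make the governance recommendation concrete, here’s a minimal sketch of what a top-down AI usage policy could look like in code. Everything in it is illustrative: the model names, data tiers, and use cases are hypothetical, and a real policy would live in a managed configuration store rather than a script. The key idea is deny-by-default: any model, use case, or data tier not explicitly approved is blocked.

```python
# A minimal sketch of a top-down AI usage policy as a simple in-code
# allowlist. All model names, data tiers, and use cases are hypothetical;
# a real policy would live in a managed configuration store.
from dataclasses import dataclass

# Hypothetical data classifications, ordered least to most sensitive.
DATA_TIERS = ["public", "internal", "confidential", "restricted"]

@dataclass
class ModelPolicy:
    name: str            # approved model or tool
    max_data_tier: str   # most sensitive data tier it may access
    approved_uses: set   # e.g., {"summarization", "code-assist"}

# Illustrative allowlist; it also doubles as a rough inventory of models.
POLICIES = {
    "general-chat-model": ModelPolicy("general-chat-model", "internal",
                                      {"summarization", "drafting"}),
    "code-assist-model": ModelPolicy("code-assist-model", "confidential",
                                     {"code-assist"}),
}

def is_allowed(model: str, data_tier: str, use_case: str) -> bool:
    """Deny by default: unknown models, uses, or over-tier data all fail."""
    policy = POLICIES.get(model)
    if policy is None or use_case not in policy.approved_uses:
        return False
    if data_tier not in DATA_TIERS:
        return False
    return DATA_TIERS.index(data_tier) <= DATA_TIERS.index(policy.max_data_tier)

print(is_allowed("general-chat-model", "internal", "summarization"))   # True
print(is_allowed("general-chat-model", "restricted", "summarization")) # False
print(is_allowed("shadow-it-model", "public", "drafting"))             # False
```

Note that the same allowlist doubles as a rough inventory of which models are in play and which data they can touch, which is exactly the kind of visibility the dashboard recommendation calls for.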
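
The zero-trust recommendation can be sketched in the same spirit. The example below assumes hypothetical verify_token and grant-lookup helpers standing in for an identity provider and a policy engine. The point is that every request is authenticated and authorized on its own terms, even when it originates inside the network, and jurisdiction tags keep data from crossing legal boundaries.

```python
# A minimal sketch of a zero-trust gate in front of an internal dataset.
# verify_token and the GRANTS table are hypothetical stand-ins; in a real
# deployment these calls would go to your identity provider and policy engine.
from datetime import datetime, timezone

def verify_token(token: str) -> str | None:
    """Stand-in for an identity-provider check; illustrative only."""
    return "analyst@example.com" if token == "valid-token" else None

# Hypothetical per-principal, per-dataset grants with jurisdiction tags,
# so EU-resident data is never served to a caller outside the EU.
GRANTS = {
    ("analyst@example.com", "eu_customer_records"): {"region": "EU"},
}

def access_dataset(token: str, dataset: str, caller_region: str) -> None:
    # 1. Authenticate every request, even those from "inside" the network.
    principal = verify_token(token)
    if principal is None:
        raise PermissionError("unauthenticated request")
    # 2. Authorize against explicit grants; no grant means no access.
    grant = GRANTS.get((principal, dataset))
    if grant is None:
        raise PermissionError(f"{principal} has no grant for {dataset}")
    # 3. Enforce jurisdiction so data stays within its legal region.
    if grant["region"] != caller_region:
        raise PermissionError("cross-jurisdiction transfer blocked")
    # 4. Record an audit entry for every successful access.
    print(f"{datetime.now(timezone.utc).isoformat()} {principal} -> {dataset}")

access_dataset("valid-token", "eu_customer_records", "EU")  # allowed and logged
```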

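Finally, here’s one way the human-in-the-loop factor for agents could look, again as a sketch rather than a prescription. The SENSITIVE_ACTIONS set, the action names, and the input() prompt are all stand-ins; a production version would route approval through a ticketing or chat workflow. Routine actions pass straight through, while anything sensitive pauses until a human signs off on the exact request.

```python
# A minimal sketch of a human-in-the-loop gate for agent actions. The
# SENSITIVE_ACTIONS set and action names are hypothetical; a production
# version would route approval through a ticketing or chat workflow.

SENSITIVE_ACTIONS = {"delete_records", "transfer_funds", "change_acl"}

def run_agent_action(action: str, args: dict) -> str:
    """Execute an agent-requested action, pausing for a human on sensitive ones."""
    if action in SENSITIVE_ACTIONS:
        # Second factor: a human must explicitly approve the exact request.
        answer = input(f"Agent requests {action}({args}). Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "denied: human reviewer rejected the request"
    # Dispatch to the real implementation (stubbed here for the sketch).
    return f"executed {action} with {args}"

# A low-risk action runs straight through; a sensitive one waits for review.
print(run_agent_action("summarize_report", {"id": 42}))
print(run_agent_action("delete_records", {"table": "customers"}))
```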