
When AI becomes the weapon: How to get ahead in the AI cybersecurity arms race

Published December 9, 2025 in Artificial Intelligence • 11 min read

AI is reshaping cybersecurity, arming both hackers and defenders. Learn how to stay ahead in the fast-evolving AI cybersecurity arms race.


When Anthropic recently released its latest threat intelligence report, it revealed an alarming evolution in AI-powered attacks. Anthropic’s security team had intercepted a lone hacker who transformed artificial intelligence into a one-person ransomware enterprise. The attack demonstrated how cybercriminals can use AI to automate complex operations that previously required entire criminal organizations. The hacker used AI coding agents to systematically identify vulnerable websites and web services, then deployed machine learning models to write malicious code exploiting those vulnerabilities.

After stealing data, the attacker employed large language models to analyze and prioritize the stolen information by sensitivity and extortion value before sending automated ransom demands to the targeted companies. In all, the attacker carried out 17 successful ransomware incidents, demanding ransoms of between $75,000 and $500,000.

What would traditionally require an entire criminal organization had been condensed into a single operator leveraging AI’s capabilities. “This was one person, doing what would normally take a whole group of operators in a ransomware gang to do,” said Öykü Işık, Professor of Digital Strategy and Cybersecurity at IMD. “This is a very recent and very real example of how things are evolving, and companies need to be prepared.”


AI cybersecurity insights from industry

Işık’s warning is borne out by industry research. IBM’s latest Cost of a Data Breach Report 2025: The AI Oversight Gap revealed alarming weaknesses in AI security governance across organizations worldwide. While only 13% of companies reported breaches involving AI models or applications, a staggering 97% of those organizations lacked proper AI access controls. A further 8% of companies admitted they did not know whether they had been compromised through AI-related attacks, suggesting the true scope remains hidden.

The research exposed shadow AI as a significant vulnerability, with one in five organizations experiencing breaches due to unauthorized AI tools used by employees. These shadow AI incidents cost an average of $670,000 more than breaches at firms with controlled AI environments. Meanwhile, 63% of breached organizations either lacked AI governance policies entirely or were still developing them, with only 34% of those with policies conducting regular audits for unsanctioned AI tools.
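The finding that only a third of policy-holding organizations audit for unsanctioned tools points to a practical gap, and closing it can start small. As a minimal sketch of what such an audit could look like, not anything drawn from the IBM report, the Python example below flags outbound requests to AI services that are not on an approved list; the domain names and log format are hypothetical assumptions.

```python
# Minimal sketch: flag outbound requests to AI services that are not on
# an approved list. Domain names and log format are hypothetical.
from urllib.parse import urlparse

# Hypothetical policy: AI services the organization has sanctioned.
APPROVED_AI_DOMAINS = {"copilot.example-corp.com"}

# Hypothetical watchlist of public AI tool domains to audit for.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def audit_proxy_log(lines):
    """Yield (user, domain) pairs for unsanctioned AI tool usage.

    Assumes each log line is 'user,url' -- a stand-in for whatever
    format the organization's web proxy actually emits.
    """
    for line in lines:
        user, _, url = line.strip().partition(",")
        domain = urlparse(url).netloc.lower()
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            yield user, domain

sample = [
    "alice,https://claude.ai/chat",
    "bob,https://copilot.example-corp.com/session",
]
for user, domain in audit_proxy_log(sample):
    print(f"unsanctioned AI tool: {user} -> {domain}")
```

A real audit would work from proxy or DNS telemetry and a maintained watchlist, but the principle of comparing observed AI traffic against sanctioned tools is the same one regular audits apply.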

IBM’s research also found cybercriminals have rapidly weaponized AI capabilities, with 16% of data breaches involving attackers using AI tools – primarily for AI-generated phishing campaigns (37% of cases) and deepfake impersonation attacks (35%). The most common entry point for AI-related breaches was compromised applications, APIs, and plug-ins within AI supply chains, with 60% of these incidents leading to data compromise and 31% causing operational disruption.

These statistics underscore a critical reality: as AI democratizes both attack and defense capabilities, business leaders face an unprecedented challenge in balancing innovation with security imperatives.


AI’s double-edged impact on cybersecurity attack and defense capabilities

The artificial intelligence revolution has created a parallel transformation in both cybersecurity threats and defenses, fundamentally altering how organizations approach digital risk management. Yenni Tim, Associate Professor in the School of Information Systems and Technology Management at UNSW Business School, identified this duality as central to understanding AI’s cybersecurity implications.

“There are two dimensions to consider: cybersecurity of AI and AI for cybersecurity,” Tim explained. “AI’s black-box nature makes securing its implementation and use more complex, while at the same time, AI provides defenders with powerful tools like advanced pattern recognition for more accurate threat detection. But those same capabilities lower the barrier for attackers, who can exploit AI to scale and automate malicious activities.”
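Tim’s point about “advanced pattern recognition” can be made concrete with a small illustration. The sketch below is not drawn from her research or any particular product; it simply shows the kind of unsupervised anomaly detection she describes, using scikit-learn’s IsolationForest on login telemetry with features invented for the example.

```python
# Illustrative sketch of AI-assisted threat detection: an unsupervised
# anomaly detector over login telemetry. Features are invented for the
# example; real deployments would draw on far richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per login: [hour of day, failed attempts, MB transferred]
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),   # logins cluster around office hours
    rng.poisson(0.2, 500),    # the occasional mistyped password
    rng.exponential(5, 500),  # modest data transfer
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_logins)

# A 3 a.m. login with many failed attempts and an exfiltration-sized transfer.
suspicious = np.array([[3.0, 12.0, 900.0]])
print(detector.predict(suspicious))  # -1 means "anomalous"
```

The same model that spots this outlier will also flag unusual but legitimate behavior, which is why such tools augment, rather than replace, human analysts.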

The democratization of AI capabilities has indeed lowered entry barriers for cybercriminals. Işık observed how the traditional requirements for hacking have diminished: “We do see, unfortunately, that cybercrime market is very lucrative. Recently, through the use of AI, the entry barrier to the cybercrime market is getting lower and lower,” she said.

The underground economy has rapidly adapted to these opportunities. Dark web marketplaces offer specialized large language models designed specifically for criminal purposes, with subscription services providing hacking capabilities for as little as $90 per month, according to Işık. “These criminals move very fast, and they are very agile. They’re not bound by rules or governance mechanisms that organizations need to comply with.”

The speed and sophistication of AI-enabled attacks have also outpaced many organizations’ defensive capabilities. Research published in Harvard Business Review found that the entire phishing process can be automated using LLMs, cutting the cost of phishing attacks by more than 95% while achieving equal or greater success rates.

However, the defensive applications of AI offer substantial benefits for organizations willing to invest appropriately. “AI is also a great friend for cybersecurity, but unfortunately, that side is developing slower than the attack side,” Işık acknowledged. Organizations that implemented AI extensively throughout their security operations demonstrated measurably superior outcomes, reducing breach costs by $1.9m on average and shortening breach lifecycles by 80 days compared to organizations with limited AI security integration.

Tim emphasized that this technological arms race necessitated a fundamental shift in organizational thinking. “This is why the conversation needs to move from security alone to digital resilience,” she said. “Resilience provides a capacity lens to understand the extent to which a business can defend, respond, and recover from disruptions, including cyberattacks.”


Building digital resilience beyond traditional cybersecurity frameworks

The concept of digital resilience represents a paradigm shift from reactive security measures towards proactive organizational capacity building. Tim’s research highlights this evolution as essential for addressing AI-powered threats that traditional cybersecurity approaches struggle to counter effectively.

“Resilience is often misunderstood as a technical issue – having the most advanced systems. In reality, it is a socio-technical capacity. Resilience emerges when assets and human abilities are mobilized together through activities that enable the organization to continue functioning, adapt to disruption, and advance over time,” she explained.

This framework comprises three interconnected layers that organizations need to develop systematically. The foundational layer addresses the assets and abilities that can be drawn upon during crises. The operational layer focuses on the activities that mobilize and coordinate these resources effectively. The strategic layer encompasses the goals of continuity, adaptation, and advancement that guide resilience efforts.

“For AI-powered threats, this means leaders cannot stop at acquiring tools,” Tim explained. “They must also invest in building the abilities of their people to use AI effectively, securely, and responsibly. Only then can assets and abilities reinforce one another to support different objectives to collectively maintain resilience.”

Işık approached resilience through the lens of proactive threat anticipation. “I talk about organizations ‘thinking like a thief’ to help protect themselves from a cybersecurity perspective. What do I mean? Since the advent of the web, organizations have managed, to a certain extent, to protect themselves by taking a very reactive stance on this issue. So, thinking like a thief is more about pushing them to be proactive.”

This mindset required organizations to systematically evaluate their vulnerabilities from an attacker’s perspective. “If I were a black-hat hacker, for example, how would I breach my systems? That kind of thinking is a great way to start thinking proactively on this topic,” Işık explained.

The human element is critical in building organizational resilience. Despite technological advances, Işık said attackers continue to target human vulnerabilities as their primary strategy, observing that most malicious LLM use cases target people rather than technical vulnerabilities. “The human element remains the most targeted one in cybersecurity,” she said. “So, the better prepared we are from a behavior perspective, the better prepared organizations will be.”

IBM’s AI cybersecurity research puts numbers on the benefits of this approach: the $1.9m in average breach-cost savings and 80-day shorter breach lifecycles noted above. This dual capability contributed to the first global decline in average breach costs in five years, a 9% drop to $4.44m (implying a prior-year average of roughly $4.88m), though recovery remains challenging, with 76% of organizations taking more than 100 days to recover fully from incidents.


Achieving cross-functional cybersecurity ownership in AI-enabled environments

The traditional approach of isolating cybersecurity responsibilities within IT departments has become inadequate for AI-enabled environments, where technology decisions occur across multiple organizational functions. Işık identified a fundamental challenge: shifting organizational risk perception from technical to business responsibility. “It comes down to recognizing cyber risk as a business risk,” she said. “That’s really the starting point for genuine cross-functional ownership.”

Işık cited a high-profile failure that demonstrated the systemic nature of cybersecurity risks: in Sweden, 200 municipalities were locked down because of a cyberattack. “Apparently, these 200 municipalities all used the same cloud HR software provider – so this was a supply chain attack,” explained Işık, who noted that such incidents highlight how traditional risk assessment approaches fail to account for interconnected digital dependencies.

In response, she said effective cross-functional ownership requires embedding cybersecurity considerations within strategic planning and performance management processes. “Why don’t we make cyber resilience part of our organizations’ strategic planning cycles? And why don’t we help executives take responsibility by including this in their performance reviews?” Işık asked.

Another important step is to distribute accountability across business functions, based on decision-making authority. “Business executives need to see how their decisions change digital risks in the organization,” said Işık. “If we can hold them accountable for that, then that is a good starting point to distribute that risk across the organization and not just leave that responsibility to the Chief Information Security Officer.”

Tim’s research lab, UNSW PRAxIS, runs a portfolio of ongoing projects on the responsible use of AI in business. Emerging findings from these projects show that siloed ownership of cybersecurity is a common and critical vulnerability that organizations need to address systematically. “This siloing is common: cybersecurity is often seen as an IT problem. But in AI-enabled environments, that view is no longer adequate,” Tim explained.

The distributed nature of AI adoption amplifies this challenge. Unlike previous technologies that remained within controlled IT environments, AI tools have proliferated across business functions, enabling individual employees to make technology decisions with security implications. “AI amplifies this need because it is a general-purpose technology,” said Tim. “Individuals across functions now have greater influence over how technologies are configured and used, which means ownership must be distributed.”

She agreed with Işık’s perspective that traditional technological safeguards, while necessary, are insufficient without corresponding human capability development. “Technological guardrails remain essential, but they must be paired with knowledge building that cultivates stewardship abilities across the workforce,” she said. “When employees understand their role in shaping secure and responsible use, resilience becomes embedded across the organization rather than isolated in IT.”

The emerging quantum cybersecurity threat

While current quantum computing capabilities remain limited to specific problem domains, Işık said quantum presents a medium- and long-term strategic risk: once the technology becomes more broadly accessible, it could fundamentally alter the cryptographic assumptions underlying digital security.

“The moment this capability becomes widely accessible – over the cloud, for example – this gives rise to a new range of threats,” said Işık. “You can even go to Amazon today, which has an S3 (simple storage service) cloud computing environment that you can block time on. So, we are slowly getting there.”

Threat actors have already begun preparing for quantum decryption capabilities through “harvest now, decrypt later” strategies, collecting encrypted data for future exploitation. “We know that they have already been doing this,” said Işık. “They are sitting on encrypted data that they will be able to decrypt with quantum computing capability, because the RSA encryption that we heavily depend on is breakable with quantum computers.”

Organizational preparation for post-quantum cryptography remains inadequate, despite available solutions. Quantum-safe encryption algorithms exist, and standards institutions are actively vetting them, but Işık noted that organizations still need to invest time and resources in the process and develop a roadmap for migrating from RSA to quantum-safe encryption systems.
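A concrete first step on such a roadmap is an inventory of where quantum-vulnerable keys are in use. The sketch below is a minimal illustration, not a method Işık prescribes: it uses the widely available Python cryptography package to report a certificate’s public-key algorithm so that RSA (and equally quantum-vulnerable elliptic-curve) endpoints can be flagged for migration; the file path is a placeholder.

```python
# Minimal sketch: report a certificate's public-key algorithm so that
# quantum-vulnerable endpoints can be flagged for migration planning.
# Uses the 'cryptography' package; the file path is a placeholder.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

def key_report(pem_path: str) -> str:
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"RSA-{key.key_size}: quantum-vulnerable, schedule migration"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"EC ({key.curve.name}): quantum-vulnerable, schedule migration"
    return f"{type(key).__name__}: review against post-quantum guidance"

# Example usage (hypothetical path):
# print(key_report("certs/api-gateway.pem"))
```

Run across an estate’s certificates, a report like this gives the migration roadmap its starting inventory; the harder organizational work of prioritizing and re-keying still follows.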

Executive awareness of quantum risks is particularly limited, Işık added. “It might be only one in 50 organizations that say they are on top of this if you were to question them about quantum-safe transition planning,” she said.


Strategic leadership approaches for balancing AI ambition with cyber vigilance

Executive leaders face the complex challenge of leveraging AI’s transformative potential while maintaining appropriate security postures that protect organizational assets and stakeholder interests. Tim’s research suggests that successful leaders approach AI integration as both opportunity assessment and organizational stress testing.

“Leaders should treat AI integration as both an opportunity and a stress test of organizational resilience. The question is not simply how much you can scale or automate, but whether AI is being integrated in ways that strengthen the organization’s capacity rather than strain it,” Tim explained.

This perspective requires leaders to evaluate AI initiatives holistically rather than focusing solely on efficiency metrics. In practice, she said, this means considering broader implications, such as how AI fits with existing processes, how it shapes employees’ work satisfaction and capabilities, and whether it enhances rather than erodes organizational coherence.

“Most importantly, leaders need to view AI as part of a living system that evolves,” said Tim. “Short-term efficiency gains can easily create long-term fragility, which is why employees must be continuously supported to develop the stewardship capabilities needed to adapt these systems.”

Six AI cybersecurity questions for business leaders to consider

  1. Has the right risk tolerance for AI technologies been established, and is it understood by all risk owners?
  2. Is there proper balancing of the risks against the rewards when new AI projects are considered?
  3. Is there an effective process in place to govern and keep track of the deployment of AI projects within the organization?
  4. Is there a clear understanding of the organization-specific vulnerabilities and cyber risks related to the use or adoption of AI technologies?
  5. Is there clarity on which stakeholders within the organization need to be involved in assessing and mitigating the cyber risks from AI adoption?
  6. Are there assurance processes in place to ensure that AI deployments are consistent with the organization’s broader policies and its legal and regulatory obligations (for example, relating to data protection or health and safety)?

This article was first published by UNSW Business School in Sydney, Australia, and is republished with its permission.

Source: Artificial Intelligence and Cybersecurity: Balancing Risks and Rewards, World Economic Forum
