When Robots Go Phishing
As artificial intelligence (AI) dominates business and news cycles, policymakers, public and private cybersecurity officers (CSOs), and engineers tasked with securing the systems underpinning society—from software to energy grids to military hardware—grapple with strengthening resilience to new AI-linked cyber threats. Cyber resilience as the EU or NATO defines it long predates AI, but the increasing role of the technology means that a broader approach is needed to account for its cross-cutting role in amplifying or combating a wide range of cyber threats. At the same time, there is a growing need for societal resilience, which the EU defines as “the ability not only to withstand and cope with challenges but also to undergo transitions, in a sustainable, fair, and democratic manner”. Fostering true societal resilience in the age of AI requires understanding how it is changing the cyber threat landscape and identifying where it can be used as a defensive tool.
As part of the fifth convening of the European Cyber Agora, co-organized with Microsoft and EU Cyber Direct at the European Union Institute for Security Studies (EUISS), GMF managed a workstream on “AI and Societal Resilience”. A series of workshops and panel discussions brought together a multi-stakeholder group of participants to identify and fill gaps in understanding between siloed AI and cybersecurity policy stakeholders. This paper contains key takeaways for strengthening cyber and societal resilience in the age of AI, and proposes recommendations based on these workstream activities.
The Changing Nature of Cyber Threats
As a dual-use technology, AI can amplify threats across domains, giving it cross-cutting implications for resilience. Major AI-related trends include:
-
Attacks directly target existing AI systems
-
AI increases attack volumes and scalability: As autonomous attack agents scan for vulnerabilities, risks remain high and constant. The October 2025 EU Agency for Cybersecurity (ENISA) Threat Landscape Report highlights as a notable threat, new “stand-alone malicious AI systems” that deploy customized tools. These tools are run on local servers and are increasingly difficult to detect. One month later, Anthropic reported “the first documented case of a large-scale cyberattack executed without substantial human intervention”.
-
AI further ramps up polymorphic malware, malicious software capable of continuously modifying its code: Attackers deploy malicious software that uses generative AI to constantly adapt, evolve, and evade detection by traditional defensive tools such as anti-virus software that identify static patterns or signatures.
-
-
AI is used to scale up and enhance traditional attacks
-
Criminal deployment of AI in ransomware attacks: Small- and medium-sized organizations are particularly vulnerable to having their cyber defense measures bypassed rapidly. According to a CrowdStrike survey of small- and medium-sized enterprises (SMEs) that reported cyber incidents, firms with fewer than 25 employees had the highest rate (29%) of ransomware attacks. AI-enabled cyber threats could have significant economic impacts. A Financial Times analysis of corporate filings of S&P 500 companies found that cybersecurity was the most-cited concern.
-
Turbocharged social engineering and digital fraud: AI lowers the barriers for scamming and phishing attacks and increases the likelihood that these attacks will be hyper-personalized since it accumulates personal information across the internet at an unmatched scale. Classic phishing attacks are becoming highly sophisticated as the quality of AI-generated deepfakes accelerates. Growing suspicion of everything read, heard, or seen undermines the trust necessary for a resilient society.
-
An increasing volume of AI-generated disinformation that intensifies discussion about its classification as a cyber threat: The debate over classifying disinformation as a cyber risk reveals gaps in capacities needed for societal resilience. Disinformation and the information space broadly have not historically been the domain of Western cyber agencies, which focus on technical responses to attacks.
-
-
As AI and machine learning increase in sophistication, risks unique to these systems are exploited by malicious actors
-
Data poisoning and prompt injection: According to research from Anthropic, the United Kingdom’s Security Institute, and the Alan Turing Institute, data poisoning attacks, in which malicious actors corrupt an AI system’s training data to modify its behavior or introduce backdoors, are increasing in sophistication. A particular concern is indirect prompt injection, which makes a large language model unable to distinguish between legitimate and malicious instructions hidden, for example, in a text or email. The rise of agentic AI increases the risks of such occurrences. The more information, such as personal files, email, calendar, credit card and health information, to which an AI agent has access the more dire the consequences of a successful injection.
-
-
Heightened risks in the military domain
-
Military AI requires specific governance frameworks: Military use of AI or AI-related cybersecurity risks are governed differently from those in the civilian realm. That is arguably unsurprising when the cost of an AI-enabled cyberattack on chemical and biological weapons systems could be cataclysmic. The integration of AI into military decision-making must be balanced with the importance of keeping humans informed. Existing civilian frameworks that regulate or create governance frameworks for AI, such as the EU AI Act, the Council of Europe Framework Convention on AI and Human Rights, or the US National Institute of Standards and Technology (NIST) AI Risk Management Framework, do not address or contain specific carveouts for military use of AI. An ongoing debate about the applicability of international humanitarian law to military AI, and specifically to lethal autonomous weapons systems, continues.
-
Hype and Reality: AI as a Defensive Tool
AI is not merely a cyber risk. It is also a defensive tool, aiding in the automation, detection, and classification of incidents. AI can be used to detect indirect prompt injection, or expand traditional red-teaming exercises, in which AI is deployed alongside manual testing to simulate adversarial attacks and uncover vulnerabilities in AI systems before deployment. Models can be used to defend against AI, triage, and respond to industrial attacks.
Integrating AI into a cyber defense toolbox, however, carries risks. Imperfect models are prone to hallucinations and bias, which in turn affects security and complicates decision-making. Evaluating the deployment of frontier models—general-purpose AI systems capable of a wide range of tasks—as a defensive tool that identifies potential risks is expensive and potentially prohibitive for smaller companies. It also requires end-to-end security across an entire AI stack. Even unconscious use or integration of AI into defensive tools meant to increase safety or security is risky. AI or machine learning-based cybersecurity compliance systems that promise to assess a firm’s regulatory compliance, for example, may access and screen sensitive data, potentially undermining the broader cybersecurity objective of protecting sensitive data if the AI model were compromised. Data poisoning and manipulation, especially in the military context, remains a major concern even in defensive circumstances.
Overreliance on AI as a defensive tool also presents dangers. First, it requires more AI professionals than are available. Workforce shortages, including few experts on AI and cyber, mean frequent short-staffing. Keeping humans informed so that they may analyze attack patterns is crucial. In the long term, reliance on AI solutions rather than training a new generation of CSOs could reinforce the lack of expertise. Second, AI implementation costs—from specialized hardware to processing, infrastructure, and staff—fall heaviest on the organizations that can least afford it. AI integration may only exacerbate problems caused by a lack of basic cybersecurity in many organizations, a perennial problem. Defining responsibility also becomes more difficult when many actors, from deployer to provider to platform, are involved in the final product. AI adoption as a defensive tool, especially when deployed in critical areas, requires caution, recognition of vulnerabilities, and a clear understanding of an AI tool’s ability to solve a specific problem.
Recommendations
Harnessing AI’s benefits while strengthening defenses against its risks to achieve societal resilience requires understanding the aforementioned changes and transforming lessons learned into the concrete actions that follow.
-
EU and national governments should bridge policy siloes through a multistakeholder “AI and Cyber Working Group”. Researchers, policymakers, and staff at the EU and national levels tasked with implementing cyber and AI legislation, and private-sector CSOs dealing with AI-related cyber threats, are often unaware of key developments in separate but related fields. Establishing a formal working group to bring together these actors in a more regular and structured way would help break down policy siloes and increase understanding of policy overlaps. ENISA launched an expert group on AI Cybersecurity in 2020, but it is inactive. External actors could now coordinate such an initiative, with support from EU and national governments. The group could complement the expanded role for ENISA as envisioned in the recently proposed revision to the Cybersecurity Act.
-
CSOs must lead in demystifying AI and emphasizing basic cyber hygiene and security by design. They should communicate cyber risks and remedies to employees through mandatory cyber training and continually reemphasize the importance of basic cyber defenses via regular testing. When AI solutions are appropriate, they must be implemented purposefully. This can help avoid a chicken-or-egg problem whereby AI tools are merely scaled to better defend against AI-enabled attacks when more basic measures to do this exist. Solutions that leverage AI need to address precise weaknesses or mechanisms. They should not be put blindly into production.
-
CEOs, board members, and other organizational leaders must prioritize cybersecurity to respond to the new scale of threats. Within organizations, raising management of cyber risks to an executive- and board-level priority can help ensure that basic measures and AI-enabled solutions are implemented appropriately and with sufficient resources. Cybersecurity professionals can promote buy-in by senior leadership concerned about cyber risks but lacking technical knowledge by emphasizing the potential impact on a business’s bottom line or reputation and quantifying risks, when possible, in financial or business rather than technical terms. In turn, executives should ensure that cybersecurity teams have appropriate budgets and access to leadership via frequent briefings. An organization’s AI strategy may require more nuance when it comes to cybersecurity, and leaders must be sufficiently educated on AI’s potential as a tool and a threat to adapt enterprise strategy accordingly. Leaders can also emphasize the organizational importance of cybersecurity by setting key performance indicators related to cyber risks.
-
Governments can incentivize public-private collaboration to strengthen resilience in critical sectors through an AI-cyber tax incentive program. The ramifications of cyberattacks and risks that come with insufficient protections can reverberate between the public and private sectors. Protecting private companies that provide critical infrastructure such as power grids or water treatment plants will require increased collaboration with government agencies as AI-enhanced cyber risks increase. In the private sector, executives can help bridge current gaps in cooperation and establish a strong cybersecurity workforce by supporting staff participation in certification programs, government fellowships, or continuing education in fields such as quantum security. Governments can boost such cooperation with an AI-cyber tax incentive program for businesses that engage in initiatives to address talent shortages. Such initiatives should also be pursued at EU or multilateral levels to ensure that smaller organizations or countries have access to adequate resources. This will be particularly true as AI-translation tools, for example, lead to increasing attacks on entities using less widely read languages.
Society’s ability to harness the promises and withstand the perils of AI requires that the basic systems underpinning our digital lives remain safe and secure. Understanding AI’s continuing impact on cybersecurity and crafting effective responses to emerging threats represent key foundations for long-term resilience.
The views expressed herein are those solely of the author(s). GMF as an institution does not take positions.