
When AI Becomes the Attack Vector: The New Cybersecurity Frontier

By Osman Kuzucu · Published on 2026-02-21

A Watershed Week for AI Security

For years, the cybersecurity industry has debated whether artificial intelligence would ultimately benefit defenders or attackers more. The third week of February 2026 delivered a resounding answer: both, simultaneously. In the span of just seven days, three distinct incidents demonstrated that AI is no longer just a tool for building better defenses — it has become an attack surface, a weapon, and an unwitting accomplice all at once. From the first Android malware to leverage generative AI at runtime, to a Microsoft Copilot bug that quietly read confidential emails for weeks, and a critical vulnerability in the very SDK developers use to build AI applications — the message is clear. The era of AI-powered threats has arrived, and organizations that treat AI adoption purely as an opportunity are dangerously unprepared.

PromptSpy: The Malware That Thinks for Itself

On February 19, ESET researchers published their findings on PromptSpy — the first known Android malware to leverage generative AI at runtime. Unlike traditional mobile threats that rely on hardcoded routines, PromptSpy integrates with Google's Gemini API to navigate device interfaces dynamically. When it needs to ensure persistence on a device, it captures a screenshot of the current screen, sends it to Gemini, and receives step-by-step instructions on how to pin itself in the recent apps list. It then executes those steps via Android's Accessibility Service, checks the result, and loops back to Gemini until the task succeeds. This means the malware adapts automatically to any Android version, device manufacturer, or custom UI skin — a capability that previously required months of manual reverse engineering per device.
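The loop ESET describes is the now-familiar agent pattern: observe the screen, ask the model, act, verify, repeat. The sketch below reconstructs that pattern at a purely conceptual level. Every helper is a hypothetical, non-functional placeholder standing in for the malware's Accessibility Service plumbing; none of these names are real APIs.

```python
# Conceptual sketch of the observe -> query model -> act -> verify loop
# that ESET attributes to PromptSpy. All helpers are hypothetical stubs.

def capture_screenshot() -> bytes:
    # Hypothetical placeholder: the real malware grabs the live screen.
    return b"<screenshot bytes>"

def query_llm(image: bytes, goal: str) -> list[str]:
    # Hypothetical placeholder: PromptSpy reportedly sends the screenshot
    # and a goal to the Gemini API and receives step-by-step instructions.
    return ["open recent apps", "tap the pin icon"]

def execute_ui_step(step: str) -> None:
    # Hypothetical placeholder: replayed via Accessibility Service events.
    print(f"executing: {step}")

def goal_achieved() -> bool:
    # Hypothetical placeholder: re-inspect the screen to verify success.
    return True

def adaptive_task(goal: str, max_attempts: int = 10) -> bool:
    # No hardcoded UI paths: each attempt re-reads the actual screen, so
    # the same loop adapts to any Android version or vendor skin.
    for _ in range(max_attempts):
        for step in query_llm(capture_screenshot(), goal):
            execute_ui_step(step)
        if goal_achieved():
            return True
    return False

if __name__ == "__main__":
    adaptive_task("pin this app in the recent apps list")
```

The significance is in the control flow, not the stubs: the model, not the malware author, supplies the device-specific steps, which is exactly what makes per-device reverse engineering unnecessary.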

The core payload of PromptSpy is a built-in VNC module that grants remote operators full access to the infected device. It can capture lock screen credentials, block uninstallation attempts, take screenshots, record video of on-screen activity, and exfiltrate device data. While ESET notes that PromptSpy has not yet appeared in widespread telemetry — suggesting it may still be a proof of concept — the discovery of a distribution domain targeting users in Argentina signals it is moving beyond the lab. This is not the first time ESET has encountered AI-powered malware: PromptLock, an AI-driven ransomware strain, was discovered in August 2025. The pattern is accelerating.

Microsoft Copilot: When Your AI Assistant Reads What It Shouldn't

On February 18, Microsoft confirmed that a bug in Microsoft 365 Copilot had been silently reading and summarizing emails marked as confidential — bypassing Data Loss Prevention (DLP) policies that were explicitly configured to block such access. Tracked as CW1226324, the bug was first detected on January 21 and affected the Copilot "work tab" chat feature. The flaw caused Copilot to incorrectly process messages in users' Sent Items and Drafts folders, including emails carrying confidentiality labels specifically designed to restrict access by automated tools. For weeks, an AI tool trusted with enterprise productivity was doing exactly what DLP policies were designed to prevent.

Microsoft has not disclosed how many organizations were affected and stated only that it began rolling out a fix "earlier in February" while continuing to monitor affected users. The incident strikes at the heart of a fundamental challenge: when you integrate an LLM into your enterprise workflow and grant it broad data access, you have effectively created an insider with near-unlimited reading privileges. Traditional DLP policies were designed for human users and deterministic software — not probabilistic AI systems that interpret instructions differently with each invocation. This incident should serve as a wake-up call for any organization deploying AI assistants with access to sensitive corporate data.
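Until vendor-side enforcement can be trusted again, one pragmatic mitigation is to repeat the label check in your own integration layer, before any content reaches a model. Below is a minimal sketch of such a client-side gate; the sensitivity_label field and message shape are illustrative assumptions, since the exact metadata depends on your tenant and APIs.

```python
# Minimal sketch of a client-side guardrail: refuse to hand labeled
# content to an AI assistant, independently of whatever the vendor's
# DLP layer does. Field names here are illustrative assumptions.

BLOCKED_LABELS = {"confidential", "highly-confidential"}

def redact_for_ai(messages: list[dict]) -> list[dict]:
    """Drop any message whose sensitivity label is blocklisted, and log
    the refusal so it is visible to security monitoring."""
    allowed = []
    for msg in messages:
        label = (msg.get("sensitivity_label") or "").lower()
        if label in BLOCKED_LABELS:
            print(f"blocked: message {msg.get('id')} carries label '{label}'")
            continue
        allowed.append(msg)
    return allowed

if __name__ == "__main__":
    inbox = [
        {"id": "1", "sensitivity_label": "general", "body": "Lunch?"},
        {"id": "2", "sensitivity_label": "Confidential", "body": "M&A terms"},
    ]
    print(redact_for_ai(inbox))  # only the unlabeled message survives
```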

Semantic Kernel RCE: A Hole in the AI Development Stack

Completing the trifecta, a critical remote code execution vulnerability (CVE-2026-26030) was disclosed in Microsoft's Semantic Kernel Python SDK — the framework many developers use to integrate LLMs into their applications. Rated CVSS 9.9 out of 10, this flaw could allow attackers to execute arbitrary code on servers running Semantic Kernel-based AI applications. The vulnerability is particularly concerning because Semantic Kernel sits at the foundation layer of many enterprise AI deployments. It is the glue code between business logic and language models, meaning a compromise here could cascade through an entire AI stack. When the tools developers use to build AI are themselves vulnerable, the security of every application built on top of them is called into question.
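The remedy for framework-level flaws like this one is unglamorous: know exactly which version you run and refuse to start if it predates the patch. A small startup check along those lines is sketched below; MIN_FIXED_VERSION is a placeholder, so substitute the release named in the vendor advisory for CVE-2026-26030.

```python
# Fail fast at startup if a vulnerable AI SDK version is installed.
# MIN_FIXED_VERSION is a placeholder; use the version named in the
# vendor advisory for CVE-2026-26030.

from importlib.metadata import PackageNotFoundError, version

MIN_FIXED_VERSION = (1, 99, 0)  # hypothetical placeholder

def parse(v: str) -> tuple[int, ...]:
    # Naive parser for the sketch; a production check should use
    # packaging.version.Version instead.
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

def assert_patched(package: str = "semantic-kernel") -> None:
    try:
        installed = version(package)
    except PackageNotFoundError:
        return  # package not in this environment; nothing to enforce
    if parse(installed) < MIN_FIXED_VERSION:
        raise RuntimeError(
            f"{package} {installed} predates the patched release; "
            "refusing to start until it is upgraded."
        )

if __name__ == "__main__":
    assert_patched()
```

In practice, a scanner such as pip-audit can automate this check across the whole dependency tree once the CVE reaches the advisory databases.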

The Common Thread: AI Expands the Attack Surface

Viewed in isolation, each of these incidents tells a specific technical story. Viewed together, they reveal a systemic pattern: every layer of the AI stack is now a potential attack vector. At the application layer, AI assistants are granted data access that exceeds their security boundaries. At the infrastructure layer, the SDKs and frameworks underpinning AI applications carry their own vulnerabilities. And at the threat actor layer, adversaries are weaponizing the same generative AI capabilities that enterprises are rushing to adopt. This is not a coincidence — it is the inevitable result of deploying powerful, probabilistic systems across environments that were designed for deterministic, predictable software.

The key risks enterprises now face in this new landscape:

  • AI-powered malware that adapts in real time eliminates the advantage of device fragmentation as a natural defense barrier — attackers no longer need to build exploits for each device variant individually.
  • Enterprise AI tools with broad data access become de facto insiders — capable of reading, summarizing, and potentially exfiltrating sensitive information at a scale no human employee could match.
  • Supply chain risk now extends to AI frameworks and SDKs — a single vulnerability in a widely used AI development toolkit can compromise thousands of downstream applications simultaneously.
  • Traditional security controls — firewalls, DLP policies, access control lists — were designed for predictable, rule-following software and are fundamentally inadequate for governing probabilistic AI systems.

Building Defenses for the AI Era

These incidents do not mean organizations should abandon AI adoption — the competitive cost of falling behind is too high. But they demand a fundamental shift in how enterprises approach AI security. The days of treating AI tools as just another piece of enterprise software are over. AI systems require purpose-built security frameworks that account for their unique characteristics: probabilistic behavior, broad data access requirements, and the ability to be both tool and target simultaneously.

Practical steps organizations should take now:

  • Implement AI-specific access controls. Grant AI tools the minimum data access required for their function. Treat every AI assistant as an untrusted insider and apply zero-trust principles to their data access — including output monitoring and behavioral analysis (see the sketch after this list).
  • Audit your AI supply chain. Catalog every AI framework, SDK, and model provider in your stack. Subscribe to security advisories for each. Establish a rapid patching process specifically for AI infrastructure components, with SLAs as aggressive as those for operating system patches.
  • Deploy AI-aware monitoring. Traditional SIEM and EDR tools need to be augmented with AI-specific detection capabilities. Monitor for anomalous API calls to AI services, unusual data access patterns by AI tools, and unexpected model behavior that could indicate compromise or misuse.
  • Develop AI incident response playbooks. Your existing IR procedures likely do not cover scenarios like "our AI assistant leaked confidential data" or "an attacker is using generative AI to navigate our mobile devices." Build specific playbooks for AI-related incidents, including containment strategies that account for the real-time adaptive nature of AI-powered threats.
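As a concrete illustration of the first point, the sketch below places a least-privilege broker between AI assistants and data sources: each assistant gets an explicit scope allowlist, and every decision, allow or deny, is logged for behavioral analysis. All names here (SCOPES, authorize, the assistant identifiers) are illustrative assumptions, not a real product API.

```python
# Sketch of a least-privilege broker between AI assistants and data
# sources: every read must match an explicit per-assistant allowlist,
# and every decision is logged for behavioral monitoring.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-access-broker")

# Per-assistant allowlists: the minimum scopes each tool needs.
SCOPES = {
    "meeting-summarizer": {"calendar:read"},
    "support-copilot": {"tickets:read", "kb:read"},
}

def authorize(assistant: str, scope: str) -> bool:
    allowed = scope in SCOPES.get(assistant, set())
    # Log allows AND denies: a spike in denials is a behavioral signal
    # that an assistant (or whoever drives it) is probing beyond its
    # mandate.
    log.info("assistant=%s scope=%s decision=%s",
             assistant, scope, "allow" if allowed else "deny")
    return allowed

if __name__ == "__main__":
    authorize("meeting-summarizer", "calendar:read")  # allow
    authorize("meeting-summarizer", "mail:read")      # deny, logged
```

The design choice worth noting is the default: an assistant absent from the allowlist gets nothing, which inverts the broad-grant pattern that made the Copilot incident possible.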

The Road Ahead

The third week of February 2026 will likely be remembered as the moment the cybersecurity industry could no longer ignore the dual nature of AI. The technology that promises to revolutionize business productivity is simultaneously creating attack vectors that our existing security infrastructure was never designed to handle. Organizations that recognize this duality and invest in AI-aware security postures today will be far better positioned than those caught reacting to the next inevitable incident. The question is no longer whether AI will be weaponized — it already has been. The question is whether your defenses are evolving as fast as the threats.

Tags: ai security, cybersecurity, malware, promptspy, microsoft copilot, enterprise security, ai threats, generative ai, semantic kernel, zero trust
