The Cyber AI Parity Window: For the First Time, Defenders Have Access to the Same AI as Attackers

Lior Div

February 26, 2026 · 8 min read

In January 2024, an employee at Arup—the engineering firm behind the Sydney Opera House—joined what appeared to be a routine video call. The CFO was there. Several senior executives too. The discussion was straightforward: authorize a series of wire transfers for a confidential acquisition.

The employee complied. Fifteen transfers. Five bank accounts. Twenty-five million dollars.

Every other person on that call was a deepfake.

The fraud wasn't discovered for days—not until the employee reached out to Arup's UK headquarters and learned those executives had never been on any call. By then, the money was gone.

I've spent my career studying adversaries—first in Unit 8200, then building companies dedicated to understanding how attackers think and operate. When we founded 7AI, we did so with a conviction that artificial intelligence would fundamentally reshape the threat landscape. That attackers would leverage AI not just to work faster, but to work differently. That the complexity, volume, and efficacy of attacks would accelerate beyond what human-scale defenses could contain.

What I didn't expect was how quickly that future would arrive.


The Numbers That Keep Security Leaders Up at Night

CrowdStrike's 2026 Global Threat Report landed last week, and the data confirms what we've been seeing across our customer base: AI-enabled adversaries have crossed a threshold.

Attacks by AI-enabled adversaries increased 89% year-over-year. Not a gradual climb—a near-doubling in twelve months.

But it's not just volume. It's velocity. The average eCrime breakout time—the window between initial compromise and lateral movement—dropped to 29 minutes. That's 65% faster than the prior year.


The fastest recorded breakout? Twenty-seven seconds.

In one intrusion CrowdStrike documented, a threat actor called CHATTY SPIDER gained initial access to a law firm, moved laterally, and began exfiltrating data within four minutes. From first foothold to active theft in less time than it takes to make coffee.


This isn't incremental improvement. This is a phase change.


What AI Actually Does for Attackers

There's a tendency in our industry to either dismiss AI threats as hype or sensationalize them into science fiction. The reality is more nuanced—and in many ways, more concerning.

AI hasn't invented new attack categories. What it's done is democratize sophistication and compress timelines in ways that fundamentally alter the economics of cybercrime.

Consider phishing. Harvard researchers Bruce Schneier and Fred Heiding conducted a rigorous study of AI-generated phishing campaigns. Fully automated AI emails achieved a 54% click-through rate, on par with messages crafted by human experts and 4.5x the 12% rate of the generic-phishing control group. More striking: the AI campaigns cost 95% less to produce. A sophisticated spear-phishing operation that once required weeks of reconnaissance and skilled social engineering can now be generated in five minutes with five prompts.

Or consider the North Korean IT worker scheme. DPRK operatives have infiltrated more than 320 companies in a single year—a 220% increase—using AI-generated resumes, deepfake face-swapping in video interviews, and voice-changing software. Mandiant's CTO admitted at RSA Conference that nearly every CISO he's spoken to has unknowingly hired at least one. Even KnowBe4—a cybersecurity training company—hired a North Korean operative who passed four video interviews using an AI-enhanced stolen identity. Their EDR caught the malware deployment 25 minutes after the workstation arrived.

The CrowdStrike report documents how state-nexus actors like FANCY BEAR have begun embedding LLM prompting directly into malware—using AI at runtime to generate reconnaissance commands and evade static detection. Criminal groups like PUNK SPIDER use Gemini-generated scripts for credential dumping. Two ransomware variants, FunkLocker and RALord, share encryption flaws specific to templates generated by WormGPT, an unrestricted AI model tuned for cybercrime.

And then there's the incident that, for me, represents the clearest signal of where this is heading.


When AI Becomes the Attacker

In November 2025, Anthropic published a report documenting what they called "the first known case of a threat actor using an AI model—specifically Claude—to autonomously execute core components of a sophisticated cyber operation at scale."

A Chinese state-sponsored group designated GTG-1002 weaponized Claude to conduct reconnaissance, vulnerability discovery, exploit development, credential harvesting, lateral movement, and data exfiltration across approximately 30 organizations—including tech companies, financial institutions, and government agencies.

The AI executed 80 to 90 percent of tactical operations independently. Human operators intervened at only four to six critical decision points across the entire campaign. The request rates were, in Anthropic's words, "physically impossible" for human operators.

An AI agent autonomously executed a sophisticated multi-stage cyberattack at scale. It was detected. It was stopped—but only because Anthropic has invested heavily in adversarial monitoring. How many similar operations are running against AI systems with less visibility?

This is where the trend line points: adversaries operating at machine speed, with machine scale, requiring only occasional human judgment at inflection points.


The Historical Anomaly Hiding in Plain Sight

Here's what most analysis of AI threats misses: this moment is historically unprecedented—and not in the way you might think.

For decades, offensive cyber capabilities gave their owners years of exclusive advantage before defenders could respond.

Russia's Turla group—also known as Snake—operated one of the most sophisticated cyber espionage platforms ever discovered. The malware was finally exposed in 2014, but researchers determined it had been active since at least 2008, with some components dating to 2006. For six to eight years, Russian intelligence services conducted operations across government and military targets in more than 45 countries while defenders had no idea the capability existed.

China's APT groups have demonstrated the same dynamic. Check Point researchers discovered that Chinese APT31 had cloned the NSA's EpMe zero-day exploit as early as 2014—three years before the Shadow Brokers leak exposed the Equation Group's arsenal in 2017. When other exploits from that same arsenal were weaponized broadly in WannaCry and NotPetya, they caused billions in damage. But Chinese intelligence services had been quietly using their cloned variant for years while the rest of the world remained blind.


"For decades, nation-states had years of exclusive access to offensive capabilities before defenders could respond. AI breaks this pattern completely."

The pattern was consistent across every major technology cycle: nation-states developed offensive capabilities, used them exclusively for years, and defenders only gained access after leaks or independent discovery. By then, the damage was done and the advantage had compounded.

AI breaks this pattern completely.

The same foundational AI breakthroughs powering offensive tools are also powering a new generation of defensive capabilities. No government classified AI for years before commercial release. The technology emerged commercially, publicly, at the same moment for everyone—and that includes agentic AI systems purpose-built for security operations. For the first time, defenders can deploy AI agents that investigate, correlate, and respond at the same speed attackers operate.

This is the first time in the history of cybersecurity that defenders have access to a transformative technology at the same moment as adversaries.


The Window That Won't Stay Open

This symmetry creates an opportunity—but one with an expiration date.

Defenders also hold structural advantages that attackers cannot match. Microsoft processes 78 trillion security signals daily. Google's threat intelligence draws from protecting billions of devices. The hyperscalers have proven that AI-powered defense works at scale—now the question is how every enterprise security team can access those same capabilities.

The evidence is clear. IBM found that organizations extensively using AI and automation experienced average breach costs of $3.84 million versus $5.72 million for those without, a $1.88 million savings, and detected and contained breaches 98 days faster. The challenge has never been whether AI improves security outcomes; it's making that capability accessible to SOC teams that don't have Google's resources. That's exactly what agentic AI security platforms are designed to solve: bringing autonomous investigation and response to enterprise security operations without requiring a hyperscaler's infrastructure or headcount.


But these advantages only compound if defenders move now. Every month that passes while SOC teams rely on manual triage and legacy playbooks is a month where AI-enabled adversaries pull further ahead. The 29-minute breakout time isn't waiting for budget cycles. The 89% increase in AI-enabled attacks isn't pausing for vendor evaluations.



Fighting AI With AI

When we started 7AI, we built on a simple premise: if adversaries are going to attack at machine speed, defenders need to investigate at machine speed. Not "AI-assisted" in the sense of generating summaries or suggesting next steps—but AI agents that actually perform investigations autonomously, at the same velocity and scale that attackers operate.

The data from CrowdStrike's report validates this thesis more dramatically than I anticipated. Eighty-two percent of detections in 2025 were malware-free—adversaries operating through valid credentials, trusted identity flows, and legitimate administrative tools. You can't signature-match your way out of that. You need AI that reasons about behavior, correlates signals across domains, and acts decisively before attackers achieve their objectives.

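To make that contrast concrete, here is a deliberately minimal sketch of cross-domain behavioral correlation: pair a valid login with subsequent off-baseline activity from other telemetry domains inside the breakout window. Everything in it (the Signal fields, the hand-written per-identity baseline, the 30-minute window) is hypothetical and invented for illustration; this is not 7AI's or CrowdStrike's implementation, and real systems learn baselines statistically rather than from a lookup table.

```python
# Toy sketch of cross-domain behavioral correlation (illustrative only).
# Idea: a valid credential plus off-baseline behavior is itself the signal,
# even when no malware ever touches disk.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Signal:
    identity: str   # who the activity is attributed to
    domain: str     # "identity", "endpoint", "cloud", ...
    event: str      # e.g. "login", "admin_tool", "mass_download"
    ts: datetime

# Hypothetical per-identity baseline: events this identity normally generates.
BASELINE = {
    "j.doe": {"login", "email", "file_read"},
}

def correlate(signals: list[Signal],
              window: timedelta = timedelta(minutes=30)) -> list[str]:
    """Flag sessions where a valid login is followed, within the breakout
    window, by off-baseline events from a different telemetry domain."""
    findings = []
    for login in (s for s in signals if s.event == "login"):
        normal = BASELINE.get(login.identity, set())
        for s in signals:
            if (s.identity == login.identity
                    and s.domain != login.domain
                    and timedelta(0) <= s.ts - login.ts <= window
                    and s.event not in normal):
                findings.append(
                    f"{login.identity}: valid login at {login.ts:%H:%M} "
                    f"followed by off-baseline '{s.event}' ({s.domain}) "
                    f"at {s.ts:%H:%M}"
                )
    return findings

if __name__ == "__main__":
    t0 = datetime(2026, 2, 26, 9, 0)
    events = [
        Signal("j.doe", "identity", "login", t0),
        Signal("j.doe", "endpoint", "admin_tool", t0 + timedelta(minutes=6)),
        Signal("j.doe", "cloud", "mass_download", t0 + timedelta(minutes=11)),
    ]
    for finding in correlate(events):
        print(finding)
```

Even this toy version surfaces the CHATTY SPIDER pattern described above: a legitimate credential, then admin tooling and bulk data movement minutes later, with no malware signature anywhere in the chain.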

Not every deepfake succeeds. In 2024, a Ferrari executive foiled an attempted voice clone of the company's CEO: he noticed the "slight mechanical intonations" in the AI-generated voice and asked a verification question the caller couldn't answer. Future targets won't always have that luxury. The technology improves monthly. The next deepfake will be indistinguishable. The next autonomous operation will move faster than any human can respond.

The question isn't whether to deploy AI for defense. It's when, and how quickly.


The CrowdStrike 2026 Global Threat Report is available at crowdstrike.com. The Anthropic threat intelligence report on the Claude-enabled espionage campaign was published in November 2025.