Two new reports from Anthropic and McAfee confirm what many in security have predicted: cybercriminals are no longer experimenting with AI – they’re operationalizing it.
Anthropic documented how a single actor used Claude Code to run a month-long extortion campaign against 17 organizations across healthcare, government, and finance. Reconnaissance, credential theft, lateral movement, data analysis, and ransom note generation were all orchestrated with minimal human input, with ransom demands reportedly reaching hundreds of thousands of dollars.
McAfee showed how agentic AI is reshaping social engineering. These systems now coordinate across email, text, voice, and video. Ignore a fake LinkedIn job offer, and the same AI might follow up with a voicemail in your boss’s voice, then a deepfake video call from “IT support.” Each failure makes the system smarter for the next attempt, leading to phishing success rates that are already more than four times those of human-crafted campaigns.
A SHIFT IN THE ECONOMICS OF CYBERCRIME
From Tools to Autonomous Operators
Anthropic's August 2025 Threat Intelligence Report reveals a fundamental shift in how cybercriminals operate. Where attackers once used AI as a sophisticated tool, we're now seeing AI agents functioning as autonomous operators. These aren't chatbots helping write phishing emails; they are systems that can:
- Execute multi-stage attack campaigns spanning weeks or months
- Automatically pivot strategies based on target responses
- Scale personalized attacks to thousands of victims simultaneously
- Learn and improve from each failed attempt
The implications are profound. One attacker with access to agentic AI can now operate with the reach and persistence of a team. Traditional assumptions — that sophisticated attacks require highly skilled operators, or that defenders can trade speed for accuracy — no longer hold. Agentic AI collapses those trade-offs, enabling campaigns that are simultaneously fast, precise, scalable, and adaptive.
The New Attack Vector: Agentic Social Engineering
McAfee's research shows how agentic AI transforms social engineering from a static, one-size-fits-all tactic into dynamic, adaptive manipulation:
Multi-Channel Orchestration
Agentic AI systems now coordinate attacks across email, phone, text, and social media simultaneously. If you don't respond to the fake LinkedIn job opportunity, the system switches to an email about package delivery. If that fails, you might receive a text about suspicious account activity. Each attempt uses lessons learned from your previous reactions.
Behavioral Learning at Scale
These systems monitor social media, track job changes, and analyze communication patterns to build detailed behavioral profiles. They identify routines, vulnerabilities, and psychological triggers to exploit.
Real-Time Adaptation
Traditional red flags like "This seems suspicious" no longer end attacks; they trigger the AI to try different approaches. The system learns from resistance and adjusts its tactics accordingly, wearing down even skeptical targets through persistence and continuous refinement.
The result: phishing messages with a 54% click-through rate, compared to just 12% for human-crafted emails.
Why Traditional Defenses Are Falling Behind
Human-speed processes can’t contain machine-speed threats. A ransomware crew empowered by AI can move from initial access to full network compromise in under an hour. Even the best response teams — those that measure containment in hours instead of days — will struggle against adversaries that can launch thousands of simultaneous, adaptive campaigns.
Human-Speed Response to Machine-Speed Attacks
Security teams analyze alerts and make decisions over hours or days. Agentic AI executes complex attack chains in minutes and adapts in real time.
Static Rules vs. Dynamic Intelligence
Traditional tools rely on signatures and static rules. Agentic AI generates unique attack patterns for every target, rendering static detection obsolete before it can even be deployed.
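The failure mode described above can be sketched in a few lines. This is a deliberately minimal illustration, not any real detection engine: it models signature matching as a hash lookup against a known-bad list, and the sample lure strings and hash set are invented for the example. Even a one-character mutation, trivial for a generative system to produce at scale, defeats the exact-match check.

```python
import hashlib

# Illustrative static detector: flags only payloads whose SHA-256 hash
# appears in a known-bad list (a stand-in for signature-based tooling).
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"click here to reset your password").hexdigest(),
}

def signature_match(payload: bytes) -> bool:
    """Return True only on an exact match against the signature list."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

original = b"click here to reset your password"
variant = b"click  here to reset your password"  # one extra space

print(signature_match(original))  # True: the known lure is caught
print(signature_match(variant))   # False: a trivially mutated variant slips through
```

Real signature engines are fuzzier than a hash lookup, but the underlying economics are the same: every generated variant forces a new rule, and an attacker that generates variants faster than defenders write rules always stays ahead.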
Volume vs. Sophistication Trade-offs
Security operations have historically assumed attackers must choose between scale and complexity. Agentic AI removes that constraint, delivering sophisticated, personalized attacks at massive scale.
FIGHTING FIRE WITH FIRE
Defenders don’t just need more analysts or more alerts — they need systems that can fight fire with fire. Agentic AI defenders can investigate alerts in parallel, correlate across data sources in real time, and adapt their tactics just as quickly as the adversaries do.
"We're witnessing the most significant shift in cybersecurity since the internet went mainstream," says Lior Div, CEO and Co-Founder of 7AI. "Traditional security tools were built for a world where humans launched attacks at human speed. That world no longer exists. We're now defending against AI adversaries that can execute thousands of sophisticated, personalized attacks simultaneously while learning and adapting in real-time. The only viable defense is AI agents that can match their speed, sophistication, and adaptability."
Speed Parity
Agentic AI defenders can match the speed of agentic AI attackers. Where human analysts take hours to investigate complex incidents, AI agents can complete comprehensive investigations in minutes, correlating data from multiple sources and delivering actionable conclusions at machine speed.
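The mechanics of that speed-up are concurrency: instead of querying data sources one at a time, an agent can fan out across all of them at once and correlate the results. The sketch below is a hypothetical illustration, not 7AI's implementation; the source names, latencies, and the two-suspicious-sources escalation threshold are all invented for the example.

```python
import asyncio
import random

async def query_source(source: str, alert_id: str) -> dict:
    """Simulate querying one telemetry source (I/O latency is faked)."""
    await asyncio.sleep(random.uniform(0.05, 0.2))
    # Toy verdict logic: every source except DNS logs reports something odd.
    return {"source": source, "alert": alert_id, "suspicious": source != "dns_logs"}

async def investigate(alert_id: str) -> dict:
    """Fan out to all sources in parallel, then correlate the findings."""
    sources = ["endpoint_telemetry", "auth_logs", "dns_logs", "email_gateway"]
    findings = await asyncio.gather(*(query_source(s, alert_id) for s in sources))
    # Invented threshold: two or more suspicious sources means escalate.
    verdict = "escalate" if sum(f["suspicious"] for f in findings) >= 2 else "close"
    return {"alert": alert_id, "verdict": verdict, "findings": findings}

result = asyncio.run(investigate("ALERT-1042"))
print(result["verdict"])  # escalate: three of four sources flagged activity
```

The wall-clock cost is the slowest single query rather than the sum of all of them, which is what lets a machine-speed investigation finish before a human-speed one has opened its first console.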
"The challenge facing cybersecurity isn't about awareness or capability, it's about physics," explains Yonatan Striem-Amit, CTO and Co-Founder of 7AI. "A sophisticated attack group can compromise a network, steal data, and disappear in under an hour. Even the best security teams are celebrating reducing mean time to response from days to hours. The math simply doesn't work. You can't defend against machine-speed attacks with human-speed responses, no matter how skilled your team is. The only way to win is to deploy AI agents that can investigate, correlate, and respond at the same velocity as the threats they're facing."
Adaptive Learning
Just as attacking AI agents learn from every interaction, defensive AI agents can continuously improve their detection and response capabilities. They don't just identify known attack patterns; they recognize the behavior of AI-generated attacks, even when the specific tactics are novel.
Scale Without Compromise
Agentic AI defense systems can provide sophisticated analysis for every alert, not just the subset that human analysts have time to investigate thoroughly. This eliminates the volume-versus-sophistication trade-off that has long constrained security operations.
The 7AI Advantage: Purpose-Built for the Agentic Era
At 7AI, we recognized early that the future of cybersecurity would be defined by the battle between AI agents. Our agentic security platform wasn't retrofitted with AI capabilities; it was designed from the ground up to deploy specialized AI agents that operate autonomously.
"Others in our industry are trying to bolt AI onto systems designed for human operators," says Div. "They're putting racing stripes on horse-drawn carriages and calling them Formula 1 cars. True agentic security requires rethinking everything from the ground up. That's why we didn't modify existing SIEM or SOAR platforms. Instead, we built the first security platform designed exclusively for AI agents to operate autonomously. The difference in capability isn't incremental, it's exponential."
Autonomous Investigation
Our AI agents don't just automate existing processes; they reason through complex security scenarios independently. When an alert triggers, our agents automatically dispatch based on threat type, conduct parallel investigations across multiple data sources, and compile findings into actionable conclusions.
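Dispatch by threat type is, at its core, a routing table from alert categories to specialized handlers. The sketch below is a generic illustration of that pattern under invented names; it is not 7AI's platform code, and the threat types, handler behaviors, and fallback path are all assumptions made for the example.

```python
from typing import Callable

# Hypothetical specialized handlers, one per threat type.
def investigate_phishing(alert: dict) -> str:
    return f"phishing agent examining sender reputation for {alert['id']}"

def investigate_lateral_movement(alert: dict) -> str:
    return f"identity agent tracing the authentication chain for {alert['id']}"

# Routing table: alert category -> specialized handler.
HANDLERS: dict[str, Callable[[dict], str]] = {
    "phishing": investigate_phishing,
    "lateral_movement": investigate_lateral_movement,
}

def dispatch(alert: dict) -> str:
    """Route an alert to its specialist, falling back to a generalist."""
    handler = HANDLERS.get(alert["type"])
    if handler is None:
        return f"generalist agent triaging {alert['id']}"
    return handler(alert)

print(dispatch({"id": "A-7", "type": "phishing"}))
```

The value of the pattern is that each specialist can carry deep, narrow context (sender reputation feeds, identity graphs) without bloating a single monolithic analyzer, and new threat types are added by registering a handler rather than rewriting the pipeline.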
Dynamic Reasoning
Unlike rule-based systems that follow predetermined scripts, our agents adapt their decision-making process based on contextual understanding of each unique security scenario. This Dynamic Reasoning capability allows them to identify AI-generated attacks even when traditional indicators are absent.
Continuous Learning
Our agents continuously read threat intelligence, correlate global events with customer environments, and proactively hunt for emerging threats. As attacking AI agents evolve their tactics, our defensive agents evolve their detection and response capabilities.
"The old model of cybersecurity was reactive – wait for an attack, detect it, respond to it, then update your rules," notes Striem-Amit. "In the age of agentic AI attacks, that model guarantees failure. By the time you've detected one AI-generated attack pattern and written rules for it, the attacking AI has already generated a thousand new variants. Our agents don't learn from static threat feeds or yesterday's attack patterns. They learn from live, real-time interactions with threats as they emerge. It's the difference between studying yesterday's weather and having radar that shows you the storm that's coming."
The Stakes Have Never Been Higher
Security teams that fail to embrace agentic AI defense will find themselves in an impossible position: trying to defend against machine-speed, AI-powered attacks with human-speed, rule-based tools. The gap will only widen as attacking AI agents become more sophisticated and autonomous.
"Within 24 months, I predict we'll see the first Fortune 500 company brought down entirely by agentic AI attacks," warns Div. "Their business model disrupted, their customer trust obliterated, their competitive advantage eliminated. And it won't be because they had bad security teams or insufficient budgets. It'll be because they were fighting tomorrow's war with yesterday's weapons. The organizations that survive this transition will be those that recognize we're not just dealing with better tools for cybercriminals, we're dealing with a fundamental evolution in the nature of cyber warfare itself."
The organizations that will thrive in this new landscape are those that recognize the fundamental shift underway and invest in agentic defense capabilities now. This isn't about keeping up with the latest trend; it's about preparing for a future where the speed and sophistication of both attacks and defenses will continue to escalate.
The Future of Security: AI vs. AI
We're entering an era where cybersecurity success will be determined by the quality of your AI agents, not the size of your security team. The most effective defense against agentic AI attacks isn't giving co-pilots to more human analysts; it's deploying better AI defenders.
At 7AI, we're already seeing this future in production environments across large enterprise customers.
This isn't about replacing human expertise. It's about amplifying it. When AI agents handle the essential "non-human work" of alert triage, data correlation, and threat investigation, human analysts can focus on high-value strategic work that truly requires human judgment and creativity.
The question isn't whether agentic AI will transform cybersecurity. It already has.