The Year Threat Intelligence Got Smart: Why 2026 Web3 Security Requires AI
Web3 entered 2026 after a year of unusually high losses. Industry tracking put 2025 losses above $3.35 billion across hacks, scams, and exploits, roughly 37% higher than in 2024. The average loss per incident rose to roughly $5.3 million, even as the median fell.
That mix matters. A rising average alongside a falling median points to concentration: the typical incident is shrinking, while a handful of very large incidents account for most of the losses. Attackers are spending more time on operations that can clear meaningful size.
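A toy calculation shows how that skew looks. The figures below are made up for illustration, not real incident data: a few outsized incidents pull the mean up even while the median falls.

```python
from statistics import mean, median

# Illustrative per-incident losses in millions of USD (made-up numbers).
# Many small incidents plus a few huge ones is exactly the shape that
# pushes the average up while the median falls.
losses_2024 = [0.8, 1.0, 1.5, 2.0, 3.0, 5.0, 40.0]
losses_2025 = [0.3, 0.5, 0.6, 0.9, 1.2, 2.0, 600.0]

for year, losses in (("2024", losses_2024), ("2025", losses_2025)):
    print(year, f"mean=${mean(losses):.1f}M", f"median=${median(losses):.1f}M")

# 2024 mean=$7.6M median=$2.0M
# 2025 mean=$86.5M median=$0.9M
```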
Supply chain compromise is a good example of this shift. Two incidents accounted for roughly $1.45 billion on their own. When teams share infrastructure, vendors, libraries, and deployment patterns, one weak link can hit many targets.
AI is scaling scams and social engineering
Most real losses do not start with a novel cryptographic break. They start with access.
Generative AI models are pushing social engineering past the usual filters. Between May 2024 and April 2025, reported generative-AI-enabled scam activity increased about 456% compared with the prior year. Deepfake voice and video, automated writing, and fast iteration make impersonation easier to execute and harder to catch.
For Web3 teams, this changes the threat model. Wallet drains, signer compromise, and admin access abuse are already common. AI lowers the cost of running those campaigns at volume.
AI is starting to generate exploits
AI does not need to be a perfect auditor to be useful to an attacker. It only needs to find workable paths fast, then iterate.
In controlled evaluations against 405 historically exploited smart contracts, frontier AI models produced working exploits for 207 contracts, about 51%. When tested on vulnerabilities discovered after those models’ training cutoffs, top models still exploited 19 of 34 issues. In a separate simulation on 2,849 newly deployed smart contracts with no known bugs, AI agents uncovered two novel vulnerabilities.
Benchmarks do not map cleanly to real-world success rates. They depend on the contracts chosen, the harness, and what counts as “working.” Still, results like these show that automated exploitation is feasible today and improving quickly.
Why point-in-time security breaks in production
Smart contract audits remain a core part of responsible shipping. They reduce risk at release time. They do not cover what happens after release.
The modern stack is wider than contracts. Protocols ship across multiple chains, pull in external dependencies, rely on off-chain infrastructure, and integrate vendors and third parties. Upgrades and configuration changes keep moving the target. A review completed weeks ago can become stale after a dependency bump or a new deployment.
Meanwhile, incident timelines keep shrinking. On-chain assets move quickly, and attackers do not wait for business hours.
Security expectations are shifting toward evidence
As the Web3 ecosystem matures, the questions from partners, underwriters, and sophisticated users change. They want to know what you monitor, how you detect abnormal behavior, how you respond, and how you document fixes.
Security is becoming an operating capability. “We had an audit” is not a complete answer if you cannot show how the system is monitored and how decisions get made during an incident.
What AI-driven threat intelligence adds
AI-driven threat intelligence helps because it scales vulnerability analysis beyond human throughput, without turning every signal into a page.
It can watch code changes, deployments, transactions, and infrastructure telemetry continuously, then surface patterns worth human attention. It can reduce noise by correlating signals and ranking issues by exploitability and impact, rather than by how many rules they triggered. It can also produce artifacts that matter during audits and reviews, showing a clear trail of what was detected, how it was triaged, and what was shipped to remediate.
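As a rough sketch of that ranking idea, here is a minimal Python example. The field names (`exploitability`, `impact`, `corroborating_signals`) and the weights are assumptions for illustration, not Cantina's actual scoring model; the point is ordering findings by how exploitable and damaging they are, not by how many rules fired.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    exploitability: float       # 0..1, how feasible a working exploit looks
    impact: float               # 0..1, damage if exploited (funds at risk, etc.)
    corroborating_signals: int  # independent signals pointing at the same issue
    rules_triggered: int        # raw rule hits; deliberately NOT the ranking key

def triage_score(f: Finding) -> float:
    # Rank by expected harm, lightly boosted when independent signals agree.
    # Note what is absent: rules_triggered plays no role in the score.
    return f.exploitability * f.impact * (1 + 0.1 * f.corroborating_signals)

findings = [
    Finding("Unchecked delegatecall in upgrade path", 0.9, 0.95, 3, 2),
    Finding("Style: missing NatSpec comments", 0.0, 0.05, 0, 40),
    Finding("Oracle staleness window too wide", 0.6, 0.7, 1, 5),
]

# A noisy style rule fired 40 times; it still ranks last.
for f in sorted(findings, key=triage_score, reverse=True):
    print(f"{triage_score(f):.2f}  {f.title}")
```

Leaving the rule-hit count out of the score is the design choice that matters: a style check that fires 40 times still lands at the bottom of the queue, while a single high-exploitability finding goes to the top.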
AI does not replace expert review. It helps teams stay fast and disciplined after launch, when the threat model is live.
What we are building
At Cantina, we are building an AI code analyzer designed for signal over noise.
Most teams have tried tools that flood the backlog. That slows triage and makes it easier to miss what matters. We are focused on fewer findings, better explained, easier to act on.
The intent is straightforward. Give security and engineering teams a short list of issues that deserve attention, and enough context to fix them without a week of back and forth.
Join the waitlist
We've opened a pre-launch early-access cohort. If you want to run the analyzer on your codebase and see what it flags, and what it ignores, join the waitlist here.
