AI is shifting from a productivity helper to a dependable autonomous operator. This introduces three category risks that most security programs have not yet operationalized:
- Vibe Coding vs. Verifiable Code: 45% of AI-generated code contains OWASP-class vulnerabilities, often in subtle logic edges such as authorization and consensus.
- The "IDEsaster" Surface: Developer tools are now a security risk. Prompt injection in a README or config can hijack an AI agent to exfiltrate data or execute commands.
- Accountability Diffusion: When code has "two authors," the assumption of a single human intent and context breaks down.
If you lead security or engineering, treat AI-assisted development as a production security surface. Because it is.
The New Baseline: Code Has Two Authors
Many organizations are already shipping code that is mostly assisted by AI. Whether through "Vibe Coding" in an IDE or via agents in CI/CD, systems are proposing code that does not fully understand or match your threat model.
This shift breaks a quiet assumption in AppSec: that the "author" of a change is accountable. In the agentic era, you can get a clean PR that passes tests and looks reasonable in review, yet still contains a vulnerability (usually high or critical).
What we’re seeing is actually a control problem.
The Tooling Layer is Now Exploitable
Last year, the disclosure of IDEsaster proved that AI coding assistants could be weaponized. Because these agents can read and edit files at scale, a single prompt injection can:
- Hijack the agent’s context via a malicious repository file.
- Bypass human gates through auto-approved tool calls.
- Weaponize legitimate IDE features for remote code execution.
Even a perfect model cannot fix a vulnerable tool's behaviour. If your agents have broad permissions, you have a new supply chain problem inside the developer environment.
The Controls CISOs Should Require This Quarter
You do not need a multi-year governance program; rather, you need a set of enforceable guardrails that align with how AI is actually used.
1. Inventory AI Touchpoints (Not Just Vendors)
A map where AI can read or write code. This includes IDE extensions, PR helpers, and CI agents that can open PRs. If you cannot map these Non-Human Identities (NHIs), you cannot secure them.
2. Treat Repo Content as Untrusted Input
Your source tree now contains instructions that models read. READMEs and configuration files are now input channels.
- Action: Disable auto-approval for file writes and sandbox agent command execution.
3. Enforce Least Privilege for Agents
Most dev environments were designed for trusted humans. AI agents change that. Set hard boundaries on which repos can be read, which secrets can be accessed, and which tools can run.
4. Evidence-Based Review
If AI accelerates commits, security must run at commit speed. For security-sensitive changes (Auth, Key Management, Consensus paths), require:
- Mandatory human-in-the-loop (HITL) review.
- Audit trails for why an AI-proposed change was approved.
What We See at Cantina
Cantina works with teams reviewing the highest-stakes codebases in the industry. We see the same pattern: the highest-impact issues are rarely "exotic”; they are assumptions and logic flaws that look fine until you follow the state transitions.
That is why we built the Cantina AI Code Analyzer. We move beyond the "black box" by providing an audit-grade signal free of noise.
The Proof: Our analyzer recently flagged a high-severity issue in the Provenance Blockchain "trigger" module. It was a subtle bug that could cause panic in EndBlocker, halting the chain. The team shipped a fix in provenanced v1.27.1. This is just one case; we have a list of many validated findings that we can’t yet disclose. We’ll talk about them as time goes on.
This is the type of problem a modern security program must catch early: subtle, "reasonable" code that becomes catastrophic in production.
Join the Waitlist
AI has accelerated shipping to a degree we’re not yet fully understanding. It’s time it changed the quality of your detection.
If you are responsible for a complex codebase and want to eliminate the AppSec black box, while achieving security readiness in weeks, not months, join the Cantina AI Code Analyzer waitlist.
Contact us to request access.
