You can harden smart contracts, run multiple audits, and put a bug bounty on top. If the systems around those contracts are soft, attackers will still find a way to move money.

For most teams, that soft underbelly is Web2 (dashboards, contributor portals, admin consoles, email providers, cloud consoles): all the boring parts everyone stops thinking about once the chain is live.

From an attacker's point of view, those systems are easier to reach, easier to experiment with, and often less monitored than a contract that has been through three different auditors. That is why so many incidents that get described as Web3 hacks actually begin in very traditional places.

When the hack is not on chain

If you have been in security or DeFi for a while, you have seen the pattern.

Users visit what looks like the right app, sign something that looks normal, and watch their balances disappear. Later it turns out that nothing new was discovered in the protocol itself. The attacker changed a DNS record, or injected a script into the web app, or abused an email provider to send convincing phishing messages.

Technically that is not a smart contract exploit. The effect is the same. Money leaves wallets. Users lose trust. Leadership burns time on incident calls explaining how a very familiar failure mode slipped through the cracks.

From the outside it looks strange. The industry spends real money on contract audits, formal verification, competitions, and bounties. Then a single misconfigured web server or a sloppy SaaS integration becomes the real root cause.

Why attackers start from the web stack

Attackers do not care which part of your architecture is supposed to be the star of the show. They care about what you expose, how you expose it, and how hard you make it to test ideas.

In Web3 projects that have grown into actual businesses, the attack surface that meets those criteria is usually the Web2 half of the picture. Consider a fairly normal protocol stack.

There is a public facing app that serves as the main way users interact with the protocol. There are contributor and founder dashboards, investor portals, and internal consoles for operations. Behind that, there are APIs, databases, queues, scheduled jobs, and a collection of SaaS integrations for email, analytics, meeting records, and support.

Every piece of that chain carries some mix of identity, authorization, and access to sensitive actions. Most of it was built under shipping pressure by product engineers, not career security people.

If someone can change what users see in the browser, they can usually stage convincing wallet draining flows without touching the contract. If they can get a foothold in a backend that talks to your cloud account, they can often reach secrets, keys, or deployment pipelines. If they compromise an email or notification provider, they have an instant route to targeted phishing against your users and staff.

None of this requires breaking the math behind your protocol. It just requires exploiting the same Web2 mistakes that have existed for years.

What this looks like inside a real product

Imagine a platform that brings together community members, founders, funds, angels, and an internal team. It helps people discover projects, share updates, exchange feedback, and manage access to capital. The front end is built with a modern framework. The backend uses a typed RPC layer and a relational database. Authentication is handled by an external provider that supports wallets and browser logins. There are webhooks for meeting transcription, AI helpers for notes, and various dashboards for admins.

On a diagram, it all looks familiar. At the code level, the story is rarely as clean.

Role based access is declared in a dozen places. Some checks live in server components, others in shared middleware, others in handlers that grew over time. The product team moves fast, so there are legitimate reasons for special cases and quick patches.
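
One common remedy is to collapse those scattered checks into a single helper that every route calls. Here is a minimal, hypothetical sketch (the roles, actions, and permission table are invented for illustration, not taken from any particular product):

```typescript
// Hypothetical sketch: one shared authorization helper instead of ad hoc
// checks spread across server components, middleware, and handlers.
type Role = "admin" | "founder" | "investor" | "member";

interface Session {
  userId: string;
  role: Role;
  suspended: boolean;
}

// A single source of truth for who may perform which action.
const permissions: Record<string, Role[]> = {
  "project.edit": ["admin", "founder"],
  "deal.view": ["admin", "founder", "investor"],
  "user.suspend": ["admin"],
};

function authorize(session: Session, action: string): boolean {
  // Suspension is enforced here, once, so no legacy endpoint can forget it.
  if (session.suspended) return false;
  const allowed = permissions[action];
  return allowed !== undefined && allowed.includes(session.role);
}
```

The point of the sketch is not the table itself but the shape: when every handler routes through one `authorize` call, a reviewer can audit access control in one place instead of a dozen.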

When a security review starts on a system like this, the first few days often feel like reading a novel that has been through a few rushed rewrites. You trace a founder journey, then realize that the same object is handled differently in the investor view. You read the code that should suspend a user, then find legacy endpoints that ignore that field. You verify that an admin route is wrapped in the right guard, then see an exposed utility endpoint that returns almost the same data with fewer checks.

Nothing in that picture is exotic. It is what normal growth looks like. It is also exactly the texture attackers latch onto. One missing ownership check here. One incomplete suspension rule there. A webhook endpoint that trusts a field it should verify. A presigned file upload that assumes friendly clients and friendly file types.

From the inside, each tradeoff made sense at the time. From the outside, they add up to a map of stepping stones toward sensitive actions.

How a serious Web2 review changes the picture

The difference between a box ticking exercise and a useful Web2 review is where it starts and how deep it is willing to go.

A good review begins with the actual business context. Who are the actors? What do they want? What would a real loss look like in this particular system? That framing feeds into a threat model that includes the chain, the web stack, the cloud, and the third party tools, and does not privilege any of them just because they are trendy.

Reviewers then work the way an attacker would, but with more patience and access. They follow real user journeys through the front end and the APIs. They read the code that powers those journeys, including the middleware, background jobs, and integration hooks that never show up in the user interface. They keep a running list of ways identity, session state, and authorization can drift apart.

What comes back is not a collection of generic advice. It is a set of specific stories about how this particular system can be bent out of shape.

Here is how a founder could pivot into data that belongs to another startup. Here is how a suspended angel could keep acting through a forgotten route. Here is how an internal console could be reached through a webhook or a SaaS callback that accepts untrusted input. Here is how a misconfigured bucket or database permission could expose notes, recordings, or deal information.
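
The first two of those stories usually come down to the same root cause: the server trusts a client-supplied identifier as proof of access. A minimal sketch of the missing guard, with hypothetical entities and an in-memory store standing in for the real database:

```typescript
// Hypothetical sketch: look up ownership server-side before returning a
// record, so a founder cannot pivot into another startup's data simply by
// changing an id in a request.
interface Startup {
  id: string;
  ownerId: string;
  notes: string;
}

// In-memory stand-in for the real database.
const startups = new Map<string, Startup>([
  ["s1", { id: "s1", ownerId: "founder-a", notes: "term sheet draft" }],
  ["s2", { id: "s2", ownerId: "founder-b", notes: "cap table" }],
]);

function getStartupNotes(requesterId: string, startupId: string): string | null {
  const startup = startups.get(startupId);
  // The client chose startupId; the server decides whether the requester
  // actually owns it. Absence of this comparison is the whole bug class.
  if (!startup || startup.ownerId !== requesterId) return null;
  return startup.notes;
}
```

The same pattern, with a suspension flag folded into the check, closes the forgotten-route story as well.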

Those stories matter because they are legible to engineers and leadership at the same time. They turn abstract risk into concrete fixes that can be planned, shipped, and verified.

What this means for security leaders

For a CISO or head of security, the lesson is not that smart contract audits are a waste of money. They are essential when you are running code that moves assets without intermediaries.

The lesson is that an audit strategy that stops at the Solidity repository is incomplete. Real attackers move sideways. They target domain registrars, email providers, misconfigured consoles, forgotten staging environments, and old admin tools that never quite made it into the new access model.

Boards and regulators have started to treat Web3 security incidents as part of broader cyber risk rather than a separate category. That is a useful shift. It makes it easier to frame Web2 audits, cloud hardening work, and SaaS due diligence as part of the same risk program that covers contracts and keys.

If you are steering an organization that runs a protocol with any meaningful user base, it is worth asking a few blunt questions.

Where could someone change what our users see without us noticing within minutes? Which internal tools can move money, change limits, or alter identities, and how well are those protected? Which vendors could send messages that look like they came from us, and how much access do we give them by default? When was the last time anyone tried to break the web stack on purpose?

If the honest answers are vague, it is a signal that your Web2 surface has not had the same level of attention as your contracts and keys.

Where Cantina fits

Cantina exists because organizations kept asking for security audits that reflect the way modern protocols are built. That means mixing Web3 depth with very practical Web2 experience.

On the Web2 side, audits are led by elite researchers who have spent years looking at web apps, APIs, cloud setups, mobile code, and the glue logic that ties them to on chain systems. They do not arrive with a standard checklist. They arrive with a mental model of how attackers think and a willingness to read the whole stack.

For some clients, that means a focused look at a single new dashboard before launch. For others, it means a review that covers web, infra, and contracts in one engagement so they can walk away with a single picture of full surface risk.

If you are planning a launch, a major integration, or a new cycle of institutional onboarding, contact us to look at the quiet parts of your system.
