Organization
- @Layer-N
Engagement Type
Cantina Reviews
Period
-
Repositories
Researchers
Findings
Critical Risk
1 finding
1 fixed
0 acknowledged
Medium Risk
1 finding
1 fixed
0 acknowledged
Low Risk
4 findings
4 fixed
0 acknowledged
Informational
5 findings
4 fixed
1 acknowledged
Critical Risk (1 finding)
Permanent DoS via funding uninitialized PDA accounts
Severity
- Severity: Critical
Submitted by
m4rio
Finding Description
Because anyone can transfer lamports to any address, an attacker can pre-fund the deterministic PDA addresses that the bridge will later create. Several PDAs are instantiated with `system_instruction::create_account`:

- `challenge_block` — challenge-nullifier PDA
- `withdraw` — effect-nullifier PDA
- `set_permission` — ACL-entry PDA

`create_account` (see lines 160-168 of the System Program) fails if the target account already holds lamports. In the surrounding logic of the aforementioned PDAs, the program also checks whether the PDA is already created; if it holds lamports it verifies the owner and reverts when the owner is not the bridge program, e.g.:
challenge_block.rs

```rust
#[account(
    mut,
    seeds = [
        CHALLENGE_NULLIFIER_SEED,
        &bridge.key().to_bytes(),
        &block_id.to_le_bytes(),
        &validator.key().to_bytes(),
    ],
    bump
)]
pub challenge_nullifier: AccountInfo<'info>,

// .....

pub fn challenge_block(ctx: Context<ChallengeBlock>, block_id: u64) -> Result<()> {
    // ...
    if **ctx.accounts.challenge_nullifier.lamports.borrow() > 0 {
        assert_eq!(
            *ctx.accounts.challenge_nullifier.owner,
            ctx.accounts.program.key()
        );
        return err!(crate::BridgeError::BlockAlreadyChallenged);
    }
    // ...
}
```
By pre-funding these PDAs with a minimal lamport balance, an attacker can permanently block their creation and thus DoS the associated functionality. The most critical impact is on `challenge_block`, where a validator could be prevented from challenging a malicious block.
Recommendation
Consider doing what Anchor does when it tries to create an account and detects that the account already holds lamports:
- allocate
- assign
- make sure it's rent exempt
The following cases should be handled:
- If the account holds lamports, its data field (discriminator) is populated, and it is owned by the bridge program, revert with an "already initialized" error.
- If the account only holds lamports (lamports > 0) but is still owned by the system program or its data field is empty, perform manual account creation (allocate, assign, top up rent).
- If no lamports exist (lamports == 0) and the account is owned by the system program, the account has no data, so using the system instruction `create_account` is valid here.
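A sketch of this three-way decision in plain Rust (hypothetical names; a real implementation would issue the corresponding system-program CPIs for the manual path):

```rust
// Hypothetical model of Anchor-style "init" handling for a PDA that an
// attacker may have pre-funded with lamports.
#[derive(Debug, PartialEq)]
enum InitPath {
    AlreadyInitialized, // lamports present, data populated, owned by the program: revert
    ManualInit,         // pre-funded but empty/system-owned: allocate + assign + top up rent
    CreateAccount,      // untouched account: plain system_instruction::create_account works
}

fn choose_init_path(lamports: u64, system_owned: bool, data_len: usize) -> InitPath {
    if lamports > 0 && !system_owned && data_len > 0 {
        InitPath::AlreadyInitialized
    } else if lamports > 0 {
        // create_account would fail here, so fall back to manual creation.
        InitPath::ManualInit
    } else {
        InitPath::CreateAccount
    }
}

fn main() {
    // An attacker donated 1 lamport to the PDA: creation must still succeed.
    assert_eq!(choose_init_path(1, true, 0), InitPath::ManualInit);
    // A fresh, untouched PDA can use create_account directly.
    assert_eq!(choose_init_path(0, true, 0), InitPath::CreateAccount);
}
```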
Medium Risk (1 finding)
Unfinalized Blocks Cannot Be Pruned Due to Effects Execution Check
Severity
- Severity: Medium (Likelihood: Medium × Impact: Low)
Submitted by
FrankCastle
Description
The current logic prevents pruning of unfinalized but expired blocks due to a strict check that compares `block.facts.effects_count` with `block.effects_executed`. This check assumes all block effects must have been executed, which is only valid for finalized blocks.
However, approved but unfinalized blocks—especially those that are expired or challenged—may not have had their effects executed. Enforcing this check on such blocks causes a permanent revert when trying to prune them. As a result:
- The refund account is never refunded.
- The block accounts remain open indefinitely.
Additionally, for challenged blocks (marked invalid by validators), it’s expected that their effects should not be executed—yet this check still applies, incorrectly halting cleanup.
Here’s the problematic condition:
```rust
require!(
    block.facts.effects_count == block.effects_executed,
    BridgeError::BlockHasUnexecutedEffects
);
```
This will always fail for unfinalized blocks, since `block.effects_executed` remains `0`, while `block.facts.effects_count` is non-zero.
Recommendation
Restrict the execution check to finalized blocks only, where we are certain the effects must have been executed.
Suggested logic update:
```rust
let is_finalizable = IsChallengePeriodExpired {
    slot_current: Clock::get().unwrap().slot,
    slot_proposed: block.slot_proposed,
    slots_challenge_period: ctx.accounts.bridge.challenge_period_slots,
}
.run()
    && block.challenges < ctx.accounts.bridge.challenge_consensus_threshold;

if block.approval == Approval::Finalized || is_finalizable {
    assert!(block.facts.effects_count >= block.effects_executed);
    require!(
        block.facts.effects_count == block.effects_executed,
        BridgeError::BlockHasUnexecutedEffects
    );
}
```
This ensures:
- Finalized blocks are held to strict effect execution guarantees.
- Invalid or unfinalizable blocks can still be safely pruned.
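The decision above can be modeled as a standalone plain-Rust function (hypothetical, simplified types; not the program's actual API), which makes the two regimes easy to test:

```rust
// Simplified model: effects-execution is only enforced for blocks that are
// finalized, or could still become finalized; everything else may be pruned.
#[derive(PartialEq)]
enum Approval {
    Proposed,
    Finalized,
}

fn may_prune(
    approval: Approval,
    challenge_period_expired: bool,
    challenges: u64,
    challenge_threshold: u64,
    effects_count: u64,
    effects_executed: u64,
) -> bool {
    let is_finalizable = challenge_period_expired && challenges < challenge_threshold;
    if approval == Approval::Finalized || is_finalizable {
        // Strict guarantee: all effects must have run.
        effects_count == effects_executed
    } else {
        // Expired or successfully challenged blocks can be cleaned up as-is.
        true
    }
}

fn main() {
    // A challenged, unfinalizable block with unexecuted effects can be pruned.
    assert!(may_prune(Approval::Proposed, true, 5, 3, 10, 0));
    // A finalized block with unexecuted effects still cannot.
    assert!(!may_prune(Approval::Finalized, true, 0, 3, 10, 0));
}
```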
Low Risk (4 findings)
Whitelisted Assets Cannot Be Disabled or Blacklisted After Initialization
Severity
- Severity: Low
Submitted by
FrankCastle
Description
The current asset configuration system allows for whitelisting assets using a deterministic PDA derived from the bridge and mint addresses:
```rust
#[account(
    init,
    payer = payer,
    space = 8 + AssetConfig::INIT_SPACE,
    seeds = [
        ASSET_CONFIG_SEED,
        &bridge.key().to_bytes(),
        &mint.key().to_bytes()
    ],
    bump
)]
pub asset_config: Account<'info, AssetConfig>,
```
However, there is no mechanism to blacklist or disable an asset once it has been whitelisted. In other words, once an `asset_config` account is created, the protocol does not provide any built-in way to deactivate or restrict the asset's usage afterward.
Attempts to prevent deposits by setting `min_deposit` to zero are also explicitly disallowed; this behavior is enforced in the `set_min_deposit` instruction. This means once an asset is added, it remains usable indefinitely—even if a critical issue is discovered or the asset becomes malicious or deprecated.
Recommendation
Introduce a mechanism to deactivate or blacklist assets after they have been whitelisted. This could be achieved by:
- Adding a boolean field like `is_enabled` or `is_blacklisted` to the `AssetConfig` struct.
- Updating all relevant deposit and transfer logic to respect this flag and revert if the asset is disabled.
- Optionally allowing `min_deposit = 0` as a soft-disable, if stricter blacklisting is not implemented.
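A minimal sketch of the first two bullets, using hypothetical field and function names (the real `AssetConfig` layout and deposit path may differ):

```rust
// Hypothetical asset config with a disable flag, consulted before deposits.
struct AssetConfig {
    min_deposit: u64,
    is_enabled: bool,
}

fn check_deposit_allowed(config: &AssetConfig, amount: u64) -> Result<(), &'static str> {
    if !config.is_enabled {
        // Admin has blacklisted/disabled this asset after whitelisting it.
        return Err("asset is disabled");
    }
    if amount < config.min_deposit {
        return Err("deposit below minimum");
    }
    Ok(())
}

fn main() {
    let mut cfg = AssetConfig { min_deposit: 100, is_enabled: true };
    assert!(check_deposit_allowed(&cfg, 150).is_ok());

    // Admin disables a compromised or deprecated asset; deposits now revert.
    cfg.is_enabled = false;
    assert!(check_deposit_allowed(&cfg, 150).is_err());
}
```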
Dummy Block Spam Can Force Premature Pruning of Legitimate Blocks
Severity
- Severity: Low
Submitted by
FrankCastle
Description
The pruning mechanism currently determines a block's prunability by comparing its ID to the last proposed block ID. This opens the door for a subtle but impactful vulnerability: an operator can spam the system with many dummy or invalid block proposals, rapidly increasing the `last_block_id`.
Because pruning decisions are ID-based rather than time- or slot-based, unexpired yet valid blocks may be misclassified as prunable much earlier than intended. Although only blocks with executed effects or zero effects are immediately pruned (thanks to a safeguard), this behavior still puts legitimate blocks at risk of premature deletion in scenarios where effects are processed slowly or blocked.
This manipulation could lead to:
- Loss of valid block data before its expected lifetime.
- Unintentional halting of cross-chain message execution.
Recommendation
To prevent this type of manipulation and ensure block lifetime is preserved as intended:
- Use Slot-Based Expiry: Define block prunability based on elapsed slots since `slot_proposed`, rather than relative block ID.
- Compare Against Last Finalized Block ID: Use the last finalized block ID instead of the last proposed block ID for pruning decisions. This avoids counting non-final, potentially malicious proposals toward pruning logic.
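A minimal sketch of the slot-based variant (hypothetical names; `block_lifetime_slots` is an assumed configuration value):

```rust
// Slot-based prunability: a block becomes prunable only after its lifetime
// has elapsed, regardless of how many dummy blocks were proposed after it.
fn is_prunable(slot_current: u64, slot_proposed: u64, block_lifetime_slots: u64) -> bool {
    // saturating_sub guards against clock values below slot_proposed.
    slot_current.saturating_sub(slot_proposed) >= block_lifetime_slots
}

fn main() {
    // Spamming proposals does not affect slot arithmetic: a fresh block
    // stays unprunable until its lifetime actually elapses.
    assert!(!is_prunable(1_000, 900, 500));
    assert!(is_prunable(1_500, 900, 500));
}
```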
Incorrect freeze crumb reporting
Severity
- Severity: Low
Submitted by
m4rio
Finding Description
The `freeze` instruction correctly updates on-chain state (`ctx.accounts.bridge.frozen = freeze`), but the event emitted for off-chain observers is hard-coded:

```rust
crate::crumbs::Crumb::Freeze { freeze: true }
```
Regardless of whether the caller is freezing or un-freezing, every `Freeze` crumb reports `freeze: true`.
Recommendation
Emit the actual user-supplied value:
```rust
crate::crumbs::emit_cpi(
    &crate::crumbs::Crumb::Freeze { freeze }.versioned(),
    &ctx.accounts.program,
    &ctx.accounts.bridge,
    ctx.accounts.crumb_authority.to_account_info(),
    ctx.bumps.crumb_authority,
);
```
Deposit inflation with Fee-On-Transfer tokens
Severity
- Severity: Low
Submitted by
m4rio
Finding Description
`deposit_create` uses the deprecated unchecked SPL-Token `transfer`:

```rust
#[allow(deprecated)]
token_interface::transfer( … authority: payer …, amount )?;
```
For tokens that enforce a transfer fee or burn (“fee-on-transfer”), the receiving vault (`token_account`) ends up with less than `amount`. However, the program records the full `amount` inside the `UnqueuedDeposit`:

```rust
*cx.accounts.deposit = UnqueuedDeposit { transfer: TransferParams { … amount } };
```
This results in inflated deposits: on-chain state claims more tokens than were actually credited to the bridge vault.
Recommendation
Switch to `transfer_checked_with_fee` so the program respects the mint's decimals and fee behaviour.
The team has acknowledged this limitation and, for the moment, whitelists only standard, zero-fee SPL tokens until full support for special behaviours is added.
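If fee-on-transfer support is ever added, one robust pattern is to record the vault's balance delta rather than the requested amount. A simplified, non-Anchor sketch (hypothetical helper; in the program this would read the vault token account before and after the transfer CPI):

```rust
// Credit the deposit with what the vault actually received, not what the
// user asked to send, so fee-on-transfer tokens cannot inflate accounting.
fn credited_amount(vault_balance_before: u64, vault_balance_after: u64) -> u64 {
    vault_balance_after.saturating_sub(vault_balance_before)
}

fn main() {
    // A 1% fee-on-transfer token: the user sends 1_000 but the vault only
    // receives 990; the deposit record should store 990, not 1_000.
    let before = 5_000;
    let after = before + 990;
    assert_eq!(credited_amount(before, after), 990);
}
```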
Informational (5 findings)
Unfinished developments / TODO comments
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
0xluk3
Description
The codebase contains multiple `TODO` comments indicating incomplete implementations and pending design decisions. Notable examples include an unimplemented bridge transfer in bridge/src/lib.rs:103, unimplemented block approval in bridge/src/lib.rs:153, and a missing checked-transfer implementation in bridge/src/instructions/withdraw.rs:162. Other occurrences are present in:
- merkle.rs:105
- lib.rs:20
- instructions/initialize_bridge.rs:10
- instructions/xbridge_transfer.rs (multiple occurrences)
- instructions/deposit_create.rs (multiple occurrences)
- state/types.rs (multiple occurrences)
These incomplete implementations reduce code maintainability and may lead to unexpected behavior when attempting to use partially implemented features.
Recommendation
Address each `TODO` systematically before final production-ready deployment.
Typographical error in RPC interface structure
Severity
- Severity: Informational
Submitted by
0xluk3
Description
The RPC module in rpc.rs contains a struct named `SetWinwodSizeIx`, which appears to be a typographical error for `SetWindowSizeIx`. While this naming inconsistency doesn't affect runtime behavior, it may lead to confusion or issues when integrating with the RPC interface.
Recommendation
Rename `SetWinwodSizeIx` to `SetWindowSizeIx`.
Missing parameter guards for challenge_consensus_threshold and challenge_period_slots
Severity
- Severity: Informational
Submitted by
m4rio
Finding Description
`SetChallengeConsensusThreshold` and `SetChallengePeriodSlots` allow the bridge owner to change:
- `challenge_consensus_threshold` – number of validator challenges required to block finalization.
- `challenge_period_slots` – duration (in slots) during which challenges are accepted.
Neither instruction validates the new value. As a result the owner can set:
| Extreme value | Consequence |
| --- | --- |
| `challenge_consensus_threshold = 0` | Any block can be finalized immediately even if challenged, defeating the fraud-proof mechanism. |
| `challenge_consensus_threshold > u16::MAX` (via future refactor) or ≫ validator set size | Every block becomes unfinalizable, halting withdrawals indefinitely. |
| `challenge_period_slots = 0` | Challenge window closes in the same slot; honest validators cannot react, enabling malicious block finalization. |
| `challenge_period_slots` extremely large (e.g. `u64::MAX`) | Blocks remain “Proposed” for years, preventing finalization and liquidity exit (economic DoS). |

Recommendation
We agree that these values can be changed freely in emergency circumstances, but we advise against changing them often, especially if doing so could allow previously challenged blocks to be finalized.
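If guards are nonetheless desired, a sketch of plausible sanity checks is shown below (all bounds and names here are hypothetical, not taken from the codebase):

```rust
// Hypothetical guards rejecting the degenerate parameter values discussed
// above: zero thresholds/windows and values that make blocks unfinalizable.
fn validate_challenge_params(
    threshold: u64,
    validator_count: u64,
    period_slots: u64,
    max_period_slots: u64,
) -> Result<(), &'static str> {
    if threshold == 0 {
        return Err("threshold of zero disables fraud proofs");
    }
    if threshold > validator_count {
        return Err("threshold exceeds validator set; blocks become unfinalizable");
    }
    if period_slots == 0 {
        return Err("zero-slot challenge window");
    }
    if period_slots > max_period_slots {
        return Err("challenge window too long; economic DoS");
    }
    Ok(())
}

fn main() {
    assert!(validate_challenge_params(3, 10, 1_000, 1_000_000).is_ok());
    assert!(validate_challenge_params(0, 10, 1_000, 1_000_000).is_err());
    assert!(validate_challenge_params(11, 10, 1_000, 1_000_000).is_err());
}
```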
Centralize the challenge-period expiry check
Severity
- Severity: Informational
Submitted by
m4rio
Finding Description
The helper struct `IsChallengePeriodExpired` is defined directly inside `challenge_block.rs`. Other instructions like `finalize_block` rely on the same rule but must import or re-implement the logic manually.
Move `IsChallengePeriodExpired` into a shared `utils` module and re-export it.
Orphaned DA-Fact accounts accumulate rent
Severity
- Severity: Informational
Submitted by
m4rio
Finding Description
Each call to `finalize_da_fact` creates a dedicated PDA (`FactStateStorage`) for a single DA-fact and marks it `Finalized`. During `propose_block`, the block merely checks that the referenced DA-fact is already `Finalized`; after the block is itself finalized and eventually pruned, the DA-fact account is never touched again.
Because there is no corresponding “prune-da-fact” logic, all fact-storage PDAs persist indefinitely, even though each fact is consumed by exactly one block.
Recommendation
Add a lightweight `prune_da_fact` instruction that can be called permissionlessly once the block that consumed the fact is prunable (e.g., `block_id < last_block_id – block_lifetime`).
Furthermore, consider analyzing other accounts that might need to be pruned as well once the block is pruned.
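The suggested eligibility rule could look like this as a standalone check (hypothetical names; saturating arithmetic avoids underflow early in the bridge's life):

```rust
// Permissionless eligibility check for a prune_da_fact-style instruction:
// a fact is prunable once its consuming block falls out of the lifetime
// window, i.e. block_id < last_block_id - block_lifetime.
fn da_fact_prunable(consuming_block_id: u64, last_block_id: u64, block_lifetime: u64) -> bool {
    consuming_block_id < last_block_id.saturating_sub(block_lifetime)
}

fn main() {
    // Early on, nothing is prunable yet (saturating_sub keeps the cutoff at 0).
    assert!(!da_fact_prunable(1, 10, 100));
    // Once the consuming block ages out of the window, the fact account can
    // be closed and its rent refunded.
    assert!(da_fact_prunable(1, 200, 100));
}
```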