WalletConnect

WalletConnect Contracts

Cantina Security Report

Organization

@WalletConnect

Engagement Type

Cantina Reviews

Period

-

Researchers

Om Parikh, slowfi
Findings

Medium Risk

1 finding

1 fixed

0 acknowledged

Low Risk

4 findings

2 fixed

2 acknowledged

Informational

7 findings

5 fixed

2 acknowledged

Gas Optimizations

3 findings

2 fixed

1 acknowledged


Medium Risk (1 finding)

  1. Externally created locks can lead to DoS while claiming vesting allocation

    Severity

    Severity: Medium

    Likelihood: Medium

    ×

    Impact: Medium

    Submitted by

    Om Parikh


    Description

    If a user creates a lock on StakeWeight independently, by transferring tokens directly rather than via a linked vester, and the same user also has a vesting schedule whose vested amount was not used for the lock, then withdrawing from the vester reverts in the inequality comparison in LockedTokenStaker.handlePostClaim, because that check does not account for the transferred tokens in the active lock.

    Since handlePostClaim is always triggered, even when the beneficiary did not use LockedTokenStaker to create the lock, there is no way to claim the vesting allocation in this scenario.

    Recommendation

    Locks created independently, without using the staker, require the transferred funds to be tracked so that they can be tallied in the post-claim handler. A test covering this scenario (before and after the fix) should also be added.

     uint256 lockedAmount = SafeCast.toUint256(lock.amount);
    +uint256 transferredAmount = lock.transferredAmount;
     // Check if there's enough unlocked tokens for the claim
    -if (remainingAllocation < lockedAmount + claimAmount) {
    +if (remainingAllocation + transferredAmount < lockedAmount + claimAmount) {
         revert CannotClaimLockedTokens(remainingAllocation, lockedAmount, claimAmount);
     }

Low Risk (4 findings)

  1. Incomplete Reward Distribution When More Than 52 Weeks Elapse Between Checkpoints

    State

    Acknowledged

    Severity

    Severity: Low

    Submitted by

    slowfi


    Description

    The _checkpointToken function in StakingRewardDistributor distributes rewards proportionally across elapsed weeks since the last checkpoint. However, it limits iteration to MAX_REWARD_ITERATIONS (52). If more than 52 weeks pass between lastTokenTimestamp and the current block.timestamp, the loop exits early while lastTokenTimestamp and lastTokenBalance are updated to the current time and balance.

    As a result, any unprocessed period beyond the 52-week limit is never recorded in tokensPerWeek, and the corresponding tokens become permanently undistributable. Future checkpoint calls will compute a zero delta because lastTokenTimestamp has already advanced to the current timestamp, effectively stranding the remaining rewards in the contract balance.

    Proof of Concept

    The following test demonstrates how _checkpointToken fails to distribute rewards if more than 52 weeks elapse between checkpoints. It advances the timestamp by 60 weeks, deposits tokens, and then calls checkpointToken(). The test verifies that only the first 52 weekly buckets receive tokens while the remaining weeks stay empty, resulting in stranded undistributed rewards.

    function test_SRD_Over52Weeks_LosesRewards() external {
        uint256 startWeek = stakingRewardDistributor.lastTokenTimestamp();
        uint256 weeksElapsed = 60 weeks; // > MAX_REWARD_ITERATIONS (52)
        uint256 amount = 1_000_000e18;

        // Fund SRD directly (bypasses transfer restrictions and approvals)
        deal(address(l2wct), address(stakingRewardDistributor), amount);
        vm.warp(block.timestamp + weeksElapsed);
        stakingRewardDistributor.checkpointToken();

        // Assert: only first 52 buckets are filled; later ones remain zero
        uint256 sumAssigned = 0;
        uint256 filledWeeks = 0;
        for (uint256 i = 0; i <= weeksElapsed; i += 1 weeks) {
            uint256 weekTs = startWeek + i;
            uint256 w = stakingRewardDistributor.tokensPerWeek(weekTs);
            sumAssigned += w;
            if (w > 0) filledWeeks++;
        }
        // Expect only 52 filled weeks
        assertEq(filledWeeks, 52, "Only first 52 weeks should be filled");
        // Expect stranded remainder (sumAssigned < amount)
        assertLt(sumAssigned, amount, "Remainder beyond 52 weeks is stranded");

        // And internal cursors advanced as if fully processed
        assertEq(
            stakingRewardDistributor.lastTokenTimestamp(),
            block.timestamp,
            "lastTokenTimestamp incorrectly advanced to now"
        );
    }

    Recommendation

    Consider preventing lastTokenTimestamp from advancing beyond the last fully processed week when the iteration limit is reached. Alternatively, allocate the undistributed remainder to the final processed week so that all tokens are accounted for.

    If the 52-week iteration cap is kept for gas-safety reasons, consider emitting an event warning about truncated distributions and documenting the operational requirement to checkpoint at least once per year to prevent reward loss.
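
    A minimal sketch of the first option, assuming the weekly loop structure of _checkpointToken (thisWeekCursor / nextWeekCursor follow the names used later in this report; capped is an illustrative name):

    // Sketch: stop the cursor at the last fully processed week when the cap
    // is hit, so the next checkpoint picks up the remaining delta.
    bool capped = false;
    for (uint256 i = 0; i < MAX_REWARD_ITERATIONS; i++) {
        // ... allocate the pro-rata share into tokensPerWeek[thisWeekCursor] ...
        if (block.timestamp < nextWeekCursor) break;
        if (i == MAX_REWARD_ITERATIONS - 1) capped = true;
        thisWeekCursor = nextWeekCursor;
        nextWeekCursor += 1 weeks;
    }
    // When capped, do not pretend the full span was processed; the portion of
    // the balance delta belonging to later weeks must likewise be kept out of
    // the lastTokenBalance accounting.
    lastTokenTimestamp = capped ? thisWeekCursor : block.timestamp;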

    Reown: Acknowledged. This scenario requires both (1) zero user claims and (2) zero reward injections for 52+ consecutive weeks. Given operational requirements include weekly reward injections, and any active user claiming triggers a checkpoint, this is a non-issue for a live protocol.

    Cantina managed: Acknowledged by Reown team.

  2. Reward Injection Allowed After kill() and While Paused

    Severity

    Severity: Low

    Submitted by

    slowfi


    Description

    In StakingRewardDistributor, reward injection paths are not gated by lifecycle or pause state. As a result:

    • Rewards can still be injected after the contract has been killed (i.e., when isKilled is set and Killed() has been emitted).
    • Rewards can be injected while paused, since the injection flow does not check the module’s paused state.

    This creates operational inconsistencies: a killed distributor can continue accumulating undistributed balances, and a paused distributor can still receive new rewards even though distribution/claims are expected to be halted.

    Recommendation

    Consider enforcing lifecycle and pause guards on all reward-injection entry points so that injections revert when the distributor is killed and are disallowed while paused. Document the intended policy clearly (e.g., whether injections during pause should queue or be rejected).
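
    A minimal sketch of such guards, reusing the isKilled flag and the Pauser check referenced in this report (the modifier and error names are illustrative):

    modifier notKilled() {
        if (isKilled) revert KilledState(); // error name is an assumption
        _;
    }

    modifier whenInjectionAllowed() {
        if (Pauser(config.getPauser()).isStakingRewardDistributorPaused()) revert Paused();
        _;
    }

    // Gate every injection entry point with both guards.
    function injectReward(uint256 timestamp, uint256 amount)
        external
        notKilled
        whenInjectionAllowed
    {
        // ... existing injection logic ...
    }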

  3. Vesting revocation is not relayed to StakeWeight

    Severity

    Severity: Low

    Submitted by

    Om Parikh


    Description

    When a vesting schedule is revoked or cancelled by the benefactor, this information is not relayed or checkpointed to StakeWeight for positions created via LockedTokenStaker, so the weight is not updated accordingly.

    Since forceWithdrawAll can be called on StakeWeight on an address-by-address basis, the position can be revoked manually. However, until this is performed, the affected address can vote or claim rewards with incorrect weight during that window.

    Recommendation

    • Consider adding more documentation for the benefactor role and the cases in which vesting might be revoked, so it is transparent to users.
    • If the benefactor is controlled by the same party, planned efforts should be made to ensure the incorrect weight cannot be misused.
    • The isStakeWeightPaused() check can be removed from forceWithdrawAll if pausing is deemed necessary on vesting revocation, so that the admin can pause and have enough time to react for a large set of addresses.
  4. Backdated Injections After Cursor Advance Cause Missed Rewards

    State

    Acknowledged

    Severity

    Severity: Low

    Submitted by

    slowfi


    Description

    StakingRewardDistributor advances weekCursorOf[user] to the current (or next) week when a user claims. If rewards are later injected for a past week that the user’s cursor has already passed, those rewards are not considered during subsequent claims for that user. Users who delayed claiming (so their cursor remained behind) can capture the backdated allocation, while users who claimed earlier cannot—creating uneven distribution and a potential incentive to avoid timely claiming.

    Proof of Concept

    The following test demonstrates the behavior: it advances users’ cursors via claims, then injects additional rewards for a previous week. Users who had already moved their cursors forward cannot claim the backdated rewards, while a user who delayed claiming captures the extra allocation.

    function test_claim_patterns_backdated_week1_extra_favors_one_time() public {
        // Grant role
        vm.startPrank(users.admin);
        stakingRewardDistributor.grantRole(stakingRewardDistributor.REWARD_MANAGER_ROLE(), address(this));
        vm.stopPrank();

        // Prepare users and balances
        address u1 = makeAddr("u1b");
        address u2 = makeAddr("u2b");
        address u3 = makeAddr("u3b");
        deal(address(l2wct), u1, 10 ether);
        deal(address(l2wct), u2, 10 ether);
        deal(address(l2wct), u3, 10 ether);

        vm.prank(u1); l2wct.approve(address(stakeWeight), type(uint256).max);
        vm.prank(u2); l2wct.approve(address(stakeWeight), type(uint256).max);
        vm.prank(u3); l2wct.approve(address(stakeWeight), type(uint256).max);

        vm.prank(u1); stakeWeight.createPermanentLock(10 ether, 4 weeks);
        vm.prank(u2); stakeWeight.createPermanentLock(10 ether, 4 weeks);
        vm.prank(u3); stakeWeight.createPermanentLock(10 ether, 4 weeks);

        deal(address(l2wct), address(this), 100 ether);
        l2wct.approve(address(stakingRewardDistributor), type(uint256).max);

        uint256 t0 = (block.timestamp / 1 weeks) * 1 weeks;
        vm.warp(t0);

        // Base rewards: 1 ether for week1 and week2
        stakingRewardDistributor.injectReward(t0 + 1 weeks, 1 ether);
        stakingRewardDistributor.injectReward(t0 + 2 weeks, 1 ether);

        uint256 totalU1 = 0; uint256 totalU2 = 0; uint256 totalU3 = 0;

        // U2 mid-week1 claim (zero)
        vm.warp(t0 + (1 weeks) / 2);
        vm.prank(u2); totalU2 += stakingRewardDistributor.claim(u2);

        // End of week1 effective claim point t0+2w: U1 and U2 claim week1
        vm.warp(t0 + 2 weeks);
        vm.prank(u1); totalU1 += stakingRewardDistributor.claim(u1);
        vm.prank(u2); totalU2 += stakingRewardDistributor.claim(u2);

        // Mid of week3: backdate extra to week1 after U1/U2 have moved past
        vm.warp(t0 + 2 weeks + (1 weeks) / 2);
        stakingRewardDistributor.injectReward(t0 + 1 weeks, 1 ether); // extra late for week1
        vm.prank(u2); totalU2 += stakingRewardDistributor.claim(u2); // still zero

        // End of week2 effective claim point t0+3w: all claim
        vm.warp(t0 + 3 weeks);
        vm.prank(u1); totalU1 += stakingRewardDistributor.claim(u1);
        vm.prank(u2); totalU2 += stakingRewardDistributor.claim(u2);
        vm.prank(u3); totalU3 += stakingRewardDistributor.claim(u3);

        // u3 should be strictly greater; u1 and u2 should match each other
        assertEq(totalU1, totalU2, "u1 and u2 (who claimed week1 at end) should match");
        assertGt(totalU3, totalU1, "one-time claimer should capture backdated week1 extra");
    }

    Recommendation

    Consider disallowing backdated reward injections (e.g., require timestamp >= currentWeek), or handling them by re-opening past weeks for all users (e.g., allowing a bounded rescan window), so that users who claimed earlier do not miss subsequent allocations. If the current behavior is kept for gas or operational reasons, consider documenting it and enforcing in the UI that claims are only enabled when they would collect rewards.
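
    A minimal sketch of the first option, reusing injectReward's signature from the proof of concept and the _timestampToFloorWeek helper referenced elsewhere in this report (the error name is an assumption):

    function injectReward(uint256 timestamp, uint256 amount) external {
        // Reject weeks that user claim cursors may already have passed.
        if (_timestampToFloorWeek(timestamp) < _timestampToFloorWeek(block.timestamp)) {
            revert BackdatedInjection(timestamp);
        }
        // ... existing injection logic ...
    }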

    Reown: Acknowledged, we will still keep the current functionality in case we missed injecting a week's rewards, as our front end is blocking to call the claim function if it will not return rewards. I see more harm on not being able to do this for our regular users in case we miss injecting rewards, than the potentially outlined case.

    Cantina: Acknowledged by Reown team.

Informational (7 findings)

  1. Unreachable Branch in _checkpointToken Leads to Dead Code

    Severity

    Severity: Informational

    Submitted by

    slowfi


    Description

    In StakingRewardDistributor._checkpointToken, the branch:

    if (deltaSinceLastTimestamp == 0 && nextWeekCursor == timeCursor) {
        tokensPerWeek[thisWeekCursor] = tokensPerWeek[thisWeekCursor] + toDistribute;
    }

    is unreachable. By construction:

    • deltaSinceLastTimestamp == 0 implies timeCursor == block.timestamp.
    • thisWeekCursor = _timestampToFloorWeek(timeCursor).
    • nextWeekCursor = thisWeekCursor + 1 weeks.

    For any timeCursor, nextWeekCursor equals floorWeek(timeCursor) + 1 weeks, which is strictly greater than timeCursor (if timeCursor is on a week boundary, floorWeek(timeCursor) == timeCursor and nextWeekCursor == timeCursor + 1 weeks). Therefore, nextWeekCursor == timeCursor can never hold, making the branch dead code.

    The impact is mainly maintainability and minor cognitive overhead (readers may assume a scenario that cannot occur). Depending on compiler settings and optimizations, the dead block may also not be optimized away and will remain in the bytecode.

    Proof of Concept

    Add the following property-style assertion around the computation to demonstrate impossibility (for review/testing only):

    // Pseudo-instrumentation mirroring the function's variables
    uint256 timeCursor = lastTokenTimestamp;
    uint256 deltaSinceLastTimestamp = block.timestamp - timeCursor;
    uint256 thisWeekCursor = _timestampToFloorWeek(timeCursor);
    uint256 nextWeekCursor = thisWeekCursor + 1 weeks;

    // When delta == 0 (e.g., call twice within same block), the equality cannot hold.
    if (deltaSinceLastTimestamp == 0) {
        assert(nextWeekCursor != timeCursor); // always true
    }

    You can also reproduce deltaSinceLastTimestamp == 0 by calling checkpointToken() twice in the same block; the assertion will never fail.

    Recommendation

    Consider removing the unreachable branch or replacing it with the intended edge-case handling (if any). If the aim was to "dump the remainder into the current week when no time has elapsed," the correct condition would be solely deltaSinceLastTimestamp == 0, without comparing nextWeekCursor and timeCursor. Otherwise, delete the block and add a clarifying comment so future readers do not infer a non-existent path.
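
    If the remainder-dumping behavior is in fact intended, a minimal sketch of the corrected branch would be:

    // Only the zero-delta condition is needed; the week-cursor comparison
    // can never be true and is dropped.
    if (deltaSinceLastTimestamp == 0) {
        tokensPerWeek[thisWeekCursor] = tokensPerWeek[thisWeekCursor] + toDistribute;
    }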

  2. Undistributed Injected Rewards Remain Stuck Until kill()

    Severity

    Severity: Informational

    Submitted by

    slowfi


    Description

    StakingRewardDistributor retains reward tokens internally and provides no mechanism to withdraw undistributed balances other than kill(), which forwards all remaining tokens to emergencyReturn. Tokens present in the contract but not mapped into tokensPerWeek are not claimable and remain idle until the contract is killed. This does not result in a direct loss of funds but can cause temporary token stagnation.

    Recommendation

    Consider documenting this behavior clearly so operators are aware that undistributed rewards remain locked within the contract until it is explicitly killed.

  3. Consistent Casting With SafeCast

    Severity

    Severity: Informational

    Submitted by

    slowfi


    Description

    The StakeWeight.sol contract casts a signed integer to an unsigned integer directly:

    uint256 totalAmount = uint256(int256(lock.amount));

    This direct cast bypasses the type-safety provided by SafeCast and can lead to unexpected behavior if the signed value were ever negative (even if such condition is currently prevented by logic elsewhere). Other sections of the codebase already use SafeCast for similar conversions, ensuring consistency and safety.

    Recommendation

    Consider replacing the manual cast with SafeCast (e.g., SafeCast.toUint256(int256(lock.amount))) for consistency and defensive safety. This maintains uniform coding standards across the codebase and provides a clearer guarantee against unintended sign conversion.

  4. Missing _disableInitializers in constructor

    Severity

    Severity: Informational

    Submitted by

    Om Parikh


    Description

    Currently, whenever a new implementation contract is deployed for StakeWeight, it can be initialized by anyone with unwanted values. This does not pose a significant risk at present, but it should be prevented as a safety measure to avoid issues if the logic is upgraded in the future in a way that increases the risk.

    Recommendation

    _disableInitializers() should be added in StakeWeight to prevent unwanted initialization (or future reinitialization) of the implementation contract.
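
    A minimal sketch, following the standard OpenZeppelin upgradeable pattern:

    /// Lock the implementation contract so it can never be initialized
    /// directly; proxies still initialize their own storage via initialize().
    constructor() {
        _disableInitializers();
    }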

  5. Ambiguous offchain guarantees for MerkleVester

    State

    Acknowledged

    Severity

    Severity: Informational

    Submitted by

    Om Parikh


    Description

    There are certain invariants for MerkleVester that must hold true and be guaranteed by off-chain means, as they are not checked on-chain:

    • An instance of MerkleVester should only ever whitelist its linked LockedTokenStaker as the post-claim handler. If additional addresses, address(0), or no address are whitelisted, the LockedTokenStaker callback can be bypassed, leading to missed accounting. This also implies that a given MerkleVester should be linked with exactly one LockedTokenStaker.
    • Interval and calendar allocations should have different allocation ids.
    • A given beneficiary should have no more than one vesting schedule per MerkleVester. Consider two allocations A1 (interval) and A2 (calendar) on the same vester where A2.originalBeneficiary == A1.originalBeneficiary, A2.totalAllocation > A1.totalAllocation, and A2.id != A1.id: if a lock is created from A1, a withdrawal of A2 by the beneficiary will revert.

    Recommendation

    • Try implementing these invariants on-chain wherever possible (a sketch for the first one follows below).
    • Consider documenting them for the relevant parties wherever needed.
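
    As an illustration of the first point, a hypothetical on-chain guard could pin the whitelist to the single linked staker (linkedStaker, setPostClaimHandler, onlyOwner, and the error are all assumed names; MerkleVester's actual whitelist API may differ):

    // Hypothetical: restrict the post-claim handler whitelist to one address.
    address public immutable linkedStaker;

    error OnlyLinkedStaker();

    function setPostClaimHandler(address handler) external onlyOwner {
        if (handler != linkedStaker) revert OnlyLinkedStaker();
        // ... existing whitelist update ...
    }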
  6. [Appendix] Upgrade safety and storage integrity

    Severity

    Severity: Informational

    Submitted by

    Om Parikh


    Description

    Currently, mainnet fork tests ensure that storage is preserved and works as expected after the upgrade. However, the tests only touch certain storage slots; collisions or unwanted behaviour in the remaining slots would go undetected.

    Since the compiler parameters were changed from evm:paris + runs:10_000 to evm:cancun + runs:200, and the storage layout sits at a slightly different location from ERC-7201, a form of formal correctness can be obtained by dumping the account storage from op-reth / op-geth at the current block and matching all slots after the upgrade on a Foundry / Tenderly fork. The runtime of the comparison should be small, as it does not involve any network / RPC calls.

    A sparse sample set of distinct user addresses and old blocks / epochs should be picked to check that, after upgrading, all historical APIs return the same results and the new logic has not impacted any old behaviour.

    While performing the upgrade on mainnet, the relevant contracts should be paused, or the upgrade process should be atomic, to avoid any synchronization issues.
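
    A minimal sketch of such a slot-level comparison as a Foundry fork test (PROXY, sampledSlots, and performUpgrade are illustrative; slots that legitimately change, such as the ERC-1967 implementation slot, should be excluded from the sample):

    function test_storageIntactAfterUpgrade() external {
        // Snapshot the sampled slots before the upgrade.
        bytes32[] memory snapshot = new bytes32[](sampledSlots.length);
        for (uint256 i = 0; i < sampledSlots.length; i++) {
            snapshot[i] = vm.load(PROXY, sampledSlots[i]);
        }

        performUpgrade(); // proxy admin upgrades to the new implementation

        // Every sampled application slot must be byte-identical afterwards.
        for (uint256 i = 0; i < sampledSlots.length; i++) {
            assertEq(vm.load(PROXY, sampledSlots[i]), snapshot[i], "slot changed");
        }
    }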

  7. Missing beneficiary check in post claim handler

    State

    Acknowledged

    Severity

    Severity: Informational

    Submitted by

    Om Parikh


    Description

    createLockFor and increaseLockAmountFor check that msg.sender is the beneficiary:

    if (allocation.originalBeneficiary != msg.sender) {
        revert InvalidCaller();
    }

    However, this is not validated in handlePostClaim or in MerkleVester. Currently, the impact of someone else calling it with the leaf data is trivial: the call reverts if the lock is not expired, and if it is expired, the funds are sent to the beneficiary's address. The worst case is that a beneficiary who wanted to receive on a new address can be forced to receive on the same address.

    Recommendation

    • Consider adding a msg.sender check in MerkleVester, or a tx.origin check in the post-claim handler (a sketch follows below).
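
    A minimal sketch of the MerkleVester-side option, mirroring the guard quoted above (the withdraw function name and the surrounding logic are assumptions):

    // Hypothetical guard on the vester claim path.
    function withdraw(/* ... allocation / leaf data ... */) external {
        if (allocation.originalBeneficiary != msg.sender) {
            revert InvalidCaller();
        }
        // ... existing claim + post-claim handler invocation ...
    }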

Gas Optimizations (3 findings)

  1. Immutable Variable Optimization

    State

    Acknowledged

    Severity

    Severity: Gas optimization

    Submitted by

    slowfi


    Description

    Several global state variables across the contracts are assigned only once during initialize() and have no setters that modify them afterward. These variables behave as constants for the lifetime of the contract and can be declared as immutable on the implementation contract.

    Using immutable variables stores their values directly in the contract bytecode, allowing access through a single PUSH operation instead of an SLOAD. This optimization reduces gas costs and can slightly decrease stack pressure in functions that frequently access these variables.

    This change is only safe if the deployment model does not rely on proxy initialization (i.e., the values are passed in the constructor or fixed at implementation deploy time). In a proxy-based design where each proxy requires unique initialization parameters, these variables must remain in storage.

    The following variables qualify for conversion to immutable under a non-proxy or single-instance deployment model:

    • StakingRewardDistributor.sol: config, emergencyReturn and startWeekCursor
    • StakeWeight.sol: config
    • LockedTokenStaker.sol: vesterContract and config

    These variables are only set in the initializer and remain unchanged afterward. Variables related to roles or governance (e.g., AccessControl roles) are mutable and therefore excluded.

    Recommendation

    Consider declaring the following variables as immutable if the deployment model allows setting them via constructor arguments rather than an initializer:

    • config, emergencyReturn, and startWeekCursor in StakingRewardDistributor
    • config in StakeWeight
    • vesterContract and config in LockedTokenStaker

    If upgradeable proxies will be used, keep them as storage variables but consider caching frequently accessed configuration fields into local variables in gas-critical functions to minimize repeated SLOAD operations.
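
    A minimal sketch of the proxy-compatible caching alternative (IConfig is an assumed interface name; the getters mirror the pause check quoted in the pause-modifier finding below):

    function claim(address user) external returns (uint256) {
        IConfig cfg = config; // single SLOAD, reused below
        if (Pauser(cfg.getPauser()).isStakingRewardDistributorPaused()) revert Paused();
        // ... remaining logic reuses the `cfg` local instead of re-reading
        // storage, and returns the claimed amount ...
    }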

    Reown: These contracts are deployed behind TransparentUpgradeableProxy and rely on initialize() to set per-instance state. Making config, emergencyReturn, or vesterContract immutable would:

    • Break the proxy pattern: Immutables are set in the implementation constructor and baked into bytecode. Proxy storage is only written during initialize() via delegatecall, not during the constructor.
    • Prevent multi-instance deployments: We deploy multiple LockedTokenStaker proxies (reown, walletconnect, backers) pointing to the same implementation but with different vesterContract addresses. If vesterContract were immutable, all proxies would share the same value.
    • Complicate upgrades: Each upgrade would require a new implementation with different constructor arguments rather than simply passing different init parameters to the proxy.

    Cantina managed: Acknowledged by Reown team.

  2. Redundant Check for Active Permanent Lock

    Severity

    Severity: Gas optimization

    Submitted by

    slowfi


    Description

    In StakeWeight::_withdrawExpiredLock, the line

    if (s.isPermanent[user]) revert LockStillActive(type(uint256).max);

    is redundant because _withdrawAll, which is called immediately afterwards, already performs the same validation. The repeated check does not affect correctness but slightly increases bytecode size and gas usage during execution.

    Recommendation

    Consider removing the redundant isPermanent[user] check from _withdrawExpiredLock to simplify the logic and reduce gas cost, since the condition is already enforced by _withdrawAll.

  3. Missing Reusable Pause Modifier Across Contracts

    Severity

    Severity: Gas optimization

    Submitted by

    slowfi


    Description

    The StakeWeight contract performs the pause check inline using statements such as:

    if (Pauser(config.getPauser()).isStakingRewardDistributorPaused()) revert Paused();

    Each function implements this logic independently, which increases code duplication, slightly raises stack usage, and makes the pausing mechanism harder to maintain if its interface evolves.

    As mentioned in Issue #1 Immutable Variable Optimization, making config immutable allows implementing efficient reusable modifiers that perform these checks without redundant storage reads.

    Recommendation

    Consider introducing shared pause-check modifiers (e.g., whenNotPaused) or internal helper functions that reference the immutable config variable. This would unify pause enforcement logic across all modules, improve maintainability, and slightly reduce gas and stack overhead.
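
    A minimal sketch of such a shared modifier (the modifier name is illustrative; the getter mirrors the inline check quoted above):

    modifier whenNotPaused() {
        if (Pauser(config.getPauser()).isStakingRewardDistributorPaused()) revert Paused();
        _;
    }

    // Applied uniformly instead of repeating the inline check in each function.
    function checkpointToken() external whenNotPaused {
        // ... existing logic ...
    }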