Organization: @sablier-labs
Engagement Type: Cantina Reviews
Period: -
Repositories: -

Findings
- Medium Risk: 2 findings (2 fixed, 0 acknowledged)
- Low Risk: 10 findings (5 fixed, 5 acknowledged)
- Informational: 13 findings (12 fixed, 1 acknowledged)
Medium Risk (2 findings)
Redistribution rewards can lead to protocol insolvency when AGGREGATE_AMOUNT is misconfigured
Severity: Medium (Likelihood: Low, Impact: High)
Submitted by: 0xRajeev
Description
In `SablierMerkleVCA`, redistribution rewards are calculated using `AGGREGATE_AMOUNT` as a key parameter in the denominator. Because the Merkle tree is opaque on-chain, the contract cannot verify that `AGGREGATE_AMOUNT` equals the actual sum of all leaf allocations. When `AGGREGATE_AMOUNT` is accidentally set lower than the true total, the denominator in `_calculateRedistributionRewards(...)` becomes artificially small, inflating individual rewards beyond the forgone pool. There is no cap on cumulative rewards distributed, so the contract can pay out more in redistribution rewards than was ever forgone, draining tokens meant for other recipients' base claims.
Example scenario: four recipients in the Merkle tree: Alice (100), Bob (100), Carol (100), Dave (100). The true total is 400, but `AGGREGATE_AMOUNT` is mistakenly set to 250. Redistribution is enabled. Vesting runs from T0 to T100.

- Alice claims at T50: `claimAmount` = 50, `forgoneAmount` = 50. `totalForgoneAmount` = 50, `_fullAmountAllocatedToEarlyClaimers` = 100.
- Bob claims at T50: `claimAmount` = 50, `forgoneAmount` = 50. `totalForgoneAmount` = 100, `_fullAmountAllocatedToEarlyClaimers` = 200.
- Carol claims after T100 (vesting ended): `fullAmountAllocatedToRemainingClaimers` = 250 - 200 = 50. `rewards` = (100 * 100) / 50 = 200. `transferAmount` = 100 + 200 = 300. Carol receives 200 in rewards, which is 2x the entire forgone pool of 100.
- Dave claims after T100: `rewards` = (100 * 100) / 50 = 200. `transferAmount` = 100 + 200 = 300. But the contract only held 400 tokens total. Alice (50) + Bob (50) + Carol (300) = 400 already spent. Dave's `safeTransfer` reverts and he cannot claim at all.
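The scenario's arithmetic can be reproduced with a small Python model of the reward formula (an illustrative sketch based on the report's description; the function name is ours, and the variable names mirror the report's):

```python
# Simplified model of the SablierMerkleVCA redistribution math described in
# this finding. Illustrative sketch only, not contract code.
def late_claim_reward(full_amount, total_forgone, aggregate_amount, early_full_total):
    # Denominator: allocations the contract believes remain unclaimed.
    remaining = aggregate_amount - early_full_total
    return (full_amount * total_forgone) // remaining

# Four recipients of 100 each; Alice and Bob claim early at T50, each forgoing 50.
total_forgone = 100     # 50 (Alice) + 50 (Bob)
early_full_total = 200  # Alice's 100 + Bob's 100

# Misconfigured AGGREGATE_AMOUNT = 250 (true total is 400):
print(late_claim_reward(100, total_forgone, 250, early_full_total))  # 200, 2x the pool

# Correct AGGREGATE_AMOUNT = 400:
print(late_claim_reward(100, total_forgone, 400, early_full_total))  # 50 per late claimer
```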
With the correct `AGGREGATE_AMOUNT` = 400, the denominator would be 400 - 200 = 200, giving each late claimer (100 * 100) / 200 = 50 in rewards, totaling exactly 100 and matching `totalForgoneAmount`.

Recommendation
Consider one of the following mitigations:
- Enforcing a cumulative reward cap: Track total rewards distributed via a `totalRedistributed` counter and enforce that they never exceed `totalForgoneAmount`. In `_postProcessClaim(...)`, after computing `rewardAmount`, cap it to the remaining pool with, for example:

  ```solidity
  uint128 remainingPool = totalForgoneAmount - totalRedistributed;
  if (rewardAmount > remainingPool) {
      rewardAmount = remainingPool;
  }
  totalRedistributed += rewardAmount;
  ```

  This prevents insolvency regardless of `AGGREGATE_AMOUNT` and allows Dave to claim his base amount without rewards in the example above. However, it does not guarantee fairness across late claimers, since earlier late claimers may consume a disproportionate share of the pool before the cap kicks in.
- Two-phase reward distribution: Separate claiming from reward collection. In phase 1, late claimers receive their `fullAmount` and their allocation is recorded. In phase 2, a separate `claimRedistributionRewards(...)` function distributes from the forgone pool using the actual tracked total of late claimer allocations as the denominator, rather than the potentially incorrect `AGGREGATE_AMOUNT`. This removes the dependency on `AGGREGATE_AMOUNT` entirely for reward calculations, making the distribution correct by construction. The trade-off is requiring recipients to make a second transaction to collect redistribution rewards.
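The cap mitigation can be sketched as a pure function in Python (an illustrative model using the report's example numbers; names are ours, not the contract's storage layout):

```python
# Sketch of the proposed cumulative reward cap: a claim can never pay out more
# than the undistributed remainder of the forgone pool. Illustrative names only.
def capped_reward(reward_amount, total_forgone, total_redistributed):
    remaining_pool = total_forgone - total_redistributed
    paid = min(reward_amount, remaining_pool)
    return paid, total_redistributed + paid

# With the misconfigured AGGREGATE_AMOUNT, the raw reward is 200 per late
# claimer, but the cap limits Carol to the remaining pool of 100...
paid_carol, distributed = capped_reward(200, 100, 0)
print(paid_carol)  # 100
# ...and Dave's reward is capped to 0, so his base claim can still succeed.
paid_dave, distributed = capped_reward(200, 100, distributed)
print(paid_dave)   # 0
```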
Partial Withdrawals from LPG Streams Can Permanently Brick Stream State
Severity: Medium
Submitted by: Arno
Description
The `withdraw` function does not enforce all-or-nothing withdrawal semantics for LPG (Lockup Price-Gated) streams, despite `calculateStreamedAmountLPG` using a binary unlock model that returns either the full `deposited` amount or `0`.

Because `withdraw` permits any `amount <= withdrawableAmount`, a recipient can withdraw a partial amount (even 1 wei) during a temporary price spike. If the oracle price subsequently drops below the target, `streamedAmount` reverts to `0` while `amounts.withdrawn` is non-zero. This causes an underflow in `_withdrawableAmountOf`. The same underflow occurs in `_cancel`.

The result is a bricked stream: both `withdraw` and `cancel` revert until either the price rises back above the target or `endTime` is reached. This can be exploited intentionally (a recipient performs a 1 wei dust withdrawal during a price spike to permanently disable the sender's cancel ability at near-zero cost) or triggered accidentally by a legitimate partial withdrawal.

Example scenario (100 AAVE stream, target price $200):
- Price spikes to $205: the stream unlocks, `streamedAmount` = 100 AAVE.
- Recipient withdraws 1 wei: `amounts.withdrawn` = 1.
- Price drops to $180: `streamedAmount` returns to `0`.
- `_withdrawableAmountOf` computes `0 - 1` and underflows.
- `_cancel` computes `recipientAmount = 0 - 1` and underflows.
- Stream is stuck: the sender cannot cancel, the recipient cannot withdraw, and funds are locked until the next price spike or `endTime`.
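The bricked state can be illustrated with a toy Python model of the binary unlock and the withdrawable-amount math (a sketch; Solidity's checked uint subtraction is simulated by raising on negative results):

```python
# Toy model of the LPG binary unlock and the withdrawable-amount calculation.
DEPOSITED = 100  # AAVE

def streamed_amount(oracle_price, target_price):
    # Binary unlock: full deposit at or above the target, nothing below it.
    return DEPOSITED if oracle_price >= target_price else 0

def withdrawable_amount(oracle_price, target_price, withdrawn):
    streamed = streamed_amount(oracle_price, target_price)
    if streamed < withdrawn:
        # Simulates Solidity's checked subtraction reverting on underflow.
        raise OverflowError("underflow: streamedAmount < amounts.withdrawn")
    return streamed - withdrawn

print(withdrawable_amount(205, 200, 0))  # 100: the price spike unlocks the deposit
try:
    # 1 wei already withdrawn, then the price drops back below the target.
    withdrawable_amount(180, 200, 1)
except OverflowError as exc:
    print(exc)  # bricked until the price recovers or endTime is reached
```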
Recommendation
Enforce all-or-nothing withdrawal semantics for LPG streams in the `withdraw` function. If the stream model is `LOCKUP_PRICE_GATED`, require that `amount == withdrawableAmount` (i.e., the full deposited amount minus any prior withdrawals). This aligns the withdrawal behavior with the binary nature of the LPG streaming model and prevents the invariant `streamedAmount >= amounts.withdrawn` from being violated.
Low Risk (10 findings)
Claim consumed without token transfer if MerkleExecute target has fallback function and incorrect selector
Severity: Low
Submitted by: 0xRajeev
Description
`MerkleExecute` enables an airdrop distribution model where eligible users claim tokens and immediately execute a call on a target contract, which is useful for staking and lending pool deposits.

In `SablierMerkleExecute.claimAndExecute`, the contract approves the target, calls it with the configured selector and user-provided arguments, and then revokes the allowance. If the campaign creator configures an incorrect selector that the target does not implement and the target contract has a `fallback()` function, the low-level `TARGET.call(callData)` succeeds silently via the fallback without consuming the approved tokens. The claim bitmap is already set in `_preProcessClaim(...)`, so the recipient's claim index is permanently consumed. The tokens remain in the campaign contract, but the recipient cannot claim again due to the `SablierMerkleBase_IndexClaimed` check. Because the selector is an immutable campaign parameter, all claims in the campaign would be affected, resulting in every recipient losing their claim and fee without receiving tokens.
Recommendation
Consider verifying that the campaign's token balance decreased by the expected amount after the external call. If the target did not pull the tokens, revert to prevent the claim from being wasted.
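The failure mode and the recommended guard can be modeled in a few lines of Python (a toy dispatch model; all names are illustrative, and the real fix would compare ERC-20 balances before and after the external call):

```python
# Toy model: a target that implements deposit(...) but, like a Solidity
# contract with a fallback(), "succeeds" on any unknown selector without
# pulling the approved tokens. Illustrative names only.
class Target:
    def __init__(self):
        self.pulled = 0

    def call(self, selector, amount):
        if selector == "deposit(uint256)":
            self.pulled += amount  # correct selector: tokens are consumed
        return True                # unknown selector: fallback swallows the call

def claim_and_execute(target, selector, amount, campaign_balance):
    target.call(selector, amount)
    balance_after = campaign_balance - target.pulled
    # Recommended guard: revert unless the balance decreased by the claim amount.
    if campaign_balance - balance_after != amount:
        raise RuntimeError("target did not pull tokens; claim would be wasted")
    return balance_after

print(claim_and_execute(Target(), "deposit(uint256)", 50, 100))      # 50
try:
    claim_and_execute(Target(), "depositFor(uint256)", 50, 100)      # wrong selector
except RuntimeError as exc:
    print(exc)  # the guard prevents burning the claim index for nothing
```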
Sender can front-run oracle updates to cancel price-gated streams
State: Acknowledged
Severity: Low
Submitted by: 0xRajeev
Description
Lockup Price-Gated (LPG) streams are an all-or-nothing vesting model where the recipient can withdraw the full deposited amount only when the oracle price reaches or exceeds a target price, or when the stream's end time is reached. Until either condition is met, the streamed amount is zero.
With `cancelable = true`, the sender can monitor the mempool for oracle price update transactions. When the sender observes an update that would push the price to or above the target price (unlocking the full deposit for the recipient), they can front-run the oracle update with a cancel call. Since `_streamedAmountOf(...)` returns 0 when the oracle price is below the target, the cancel refunds the entire deposited amount to the sender, denying the recipient their payout. This can be repeated each time the price approaches the target, effectively allowing the sender to never pay out.

Recommendation
Consider one or both of the following mitigations:
- Add a cancel delay/timelock for LPG streams so that a cancel request takes effect after N blocks, giving the oracle time to update and the recipient time to withdraw if the price crosses the target during the delay.
- Allow LPG stream senders to cancel only within an initial grace period (e.g., shortly after creation) to recover from configuration mistakes, after which the stream becomes non-cancelable.
Sablier
We acknowledge the finding but have decided not to take any action at this moment, partly because of the contract size issue we have in Lockup, and partly because an LPG stream requires some degree of trust between the sender and the recipient. If the sender wants to discontinue the stream, they can cancel it without needing to front-run the oracle updates. Therefore, the event outlined in the finding is highly improbable.
Cantina
Acknowledged.
Arbitrary selectorArguments in MerkleExecute claims allow claimants to control external call behavior
State: Acknowledged
Severity: Low
Submitted by: 0xRajeev
Description
`SablierMerkleExecute` is designed to enable claim-and-execute campaigns where claimed tokens are automatically deployed into DeFi protocols such as staking or lending platforms. In `claimAndExecute(...)`, the `selectorArguments` parameter is entirely user-supplied and not included in the Merkle proof verification. While the claim index, recipient, and amount are validated against the Merkle tree, the arguments passed to the target contract are under the full control of the claimant (or the UI constructing the transaction). The contract concatenates the immutable `SELECTOR` with the arbitrary `selectorArguments` and executes a low-level call on the target:

```solidity
bytes memory callData = abi.encodePacked(SELECTOR, selectorArguments);
(bool success, bytes memory returnData) = TARGET.call(callData);
```

The `selectorArguments` parameter being entirely user-supplied and unverified is a risk to the `TARGET`/campaign from the claimer. The specific risks depend on what exactly the target does with claimer funds in `TARGET.SELECTOR` and its logic. Some hypothetical examples include:

- Lock period bypass: Claimers set the lock duration to zero/minimum, immediately unstake and sell -> Campaign creator's token-locking incentive alignment is completely defeated.
- Pool/strategy redirection: Claimers route tokens to a different pool, vault, or strategy than intended -> Undermines TVL goals, liquidity bootstrapping or protocol-specific staking the campaign was designed to achieve.
- Beneficiary redirection (phishing): Attacker constructs a transaction with their own address as the staking beneficiary, tricks the legitimate claimer into signing it -> Attacker receives the staked position.
- Beneficiary redirection (intentional): Claimer deliberately stakes on behalf of a different address they control (e.g., a contract that can immediately unstake) -> Used to circumvent restrictions that the staking contract applies per-address.
- Callback/data injection: If the `TARGET` function accepts a bytes data parameter forwarded to hooks or downstream contracts, then claimers control arbitrary callback data -> Could trigger unintended code paths, reentrancy, or manipulation of internal accounting.
- Extreme parameter values: Claimers pass boundary values (zero, `type(uint256).max`, etc.) for numerical parameters -> Could exploit edge cases in the staking contract's math, create positions that are hard to liquidate/unwind, or cause overflow/underflow in poorly written targets.
Recommendation
Consider:
- Including the expected `selectorArguments` (or a hash of them) in the Merkle leaf so that the proof verification ensures each claimant can only use the arguments specified by the campaign creator. This binds the external call parameters to the Merkle tree, removing claimant control over the target interaction.
- Validating critical fields within `selectorArguments` at known ABI offsets using a campaign-provided hook function.
- Validating the arguments in the UI if onchain verification is not practical.
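The first suggestion (committing the arguments into the leaf) can be sketched in Python, with sha256 standing in for keccak256 and an illustrative leaf layout rather than the contract's actual encoding:

```python
import hashlib

# Illustrative leaf commitment: binding the expected call arguments into the
# Merkle leaf so a claimant cannot substitute their own selectorArguments.
# sha256 stands in for keccak256; the field layout is a sketch.
def leaf(index, recipient, amount, selector_arguments):
    args_hash = hashlib.sha256(selector_arguments).digest()
    payload = f"{index}:{recipient}:{amount}:".encode() + args_hash
    return hashlib.sha256(payload).digest()

committed = leaf(0, "0xAlice", 100, b"lock_duration=365d")
# An honest claim reproduces the committed leaf; tampered arguments do not,
# so the Merkle proof would fail for any substituted selectorArguments.
print(leaf(0, "0xAlice", 100, b"lock_duration=365d") == committed)  # True
print(leaf(0, "0xAlice", 100, b"lock_duration=0") == committed)     # False
```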
Sablier
We decided not to make any changes because we will validate the arguments in the UI, and adding onchain verification would complicate the integration for others.
Cantina
Acknowledged.
Sender-chosen oracle in LPG streams is risky and allows unilateral control over vested amount
Severity: Low
Submitted by: 0xRajeev
Description
The LPG model introduces an oracle-dependent vesting calculation where `calculateStreamedAmountLPG(...)` returns either 0 (nothing vested) or the full deposit (everything vested) based on the oracle price. The oracle address is chosen entirely by the sender at creation time. While `validateOracle(...)` checks that the oracle implements `AggregatorV3Interface`, returns 8 decimals, and reports a positive price, it does not verify that the oracle is trustworthy, immutable, or reports the price of the deposited token.

The protocol's `SECURITY.md` acknowledges this trust assumption: "In case of price-gated streams, there should be a certain level of trust between the sender and the recipient regarding the oracle address being used. For example, when a price-gated stream is created, it is not checked if the oracle returns the price for the deposited token or some other token." However, the degree of trust required is significantly understated. The risk extends beyond oracle relevance to full sender control over vesting outcomes.

A malicious sender can deploy a custom contract that passes all validation checks but has an owner-controlled `setPrice(...)` function. This gives the sender the ability to:

- Cancel and reclaim 100% of deposited tokens at any time by setting the oracle price to 0 or negative, causing `safeOraclePrice(...)` to return 0, which makes `streamedAmount = 0` and `refundableAmount = deposited`. This is a loss-of-funds vector.
- Prevent early unlock indefinitely by keeping the oracle price below the target, forcing the recipient to wait until `endTime`. This lack of access to funds can have significant financial implications if the end time is far in the future (e.g., months or years), as the recipient's capital is effectively locked with no recourse while the sender retains the option to cancel at any point. This is a DoS/griefing vector.
This fundamentally differs from other Lockup models (LL, LD, LT) where the streamed amount is deterministic based on elapsed time and cannot be manipulated by the sender after creation. The LPG recipient bears all risk of the sender's oracle choice.
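The sender-controlled outcome can be shown with a toy Python model of the binary vesting calculation (the oracle class and names are illustrative; the vesting rule mirrors the report's description):

```python
# Toy model: a sender-owned oracle driving the LPG binary vesting outcome.
# The oracle class is illustrative; it would pass validateOracle-style checks.
class SenderOracle:
    def __init__(self):
        self.price = 0

    def set_price(self, price):  # owner-controlled
        self.price = price

def streamed_amount(oracle, target_price, deposited):
    price = max(oracle.price, 0)  # safeOraclePrice treats non-positive as 0
    return deposited if price >= target_price else 0

oracle, deposited, target = SenderOracle(), 1_000, 200
oracle.set_price(0)    # sender forces "nothing vested" right before cancelling
print(streamed_amount(oracle, target, deposited))              # 0
print(deposited - streamed_amount(oracle, target, deposited))  # 1000 refundable
oracle.set_price(199)  # or pins the price just below target to grief until endTime
print(streamed_amount(oracle, target, deposited))              # 0
```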
Recommendation
Consider the following mitigations:
- Make LPG streams non-cancelable by default or enforce non-cancelability after a short grace period, since a cancelable LPG stream with a sender-controlled oracle allows the sender to reclaim 100% of funds at any time.
- If the protocol does not want to maintain an oracle allowlist, strengthen the trust assumption in `SECURITY.md` to explicitly warn that a malicious sender can use a custom oracle to cancel and reclaim the full deposit, not just that the oracle may report the price of a different token.
- Surface the oracle address prominently in the UI so that recipients can independently verify the oracle before accepting an LPG stream.
Sablier
As a result of the finding, we have added relevant notes in the interface of `ISablierLockupPriceGated` as well as the SECURITY.md file.

Cantina
Fixed as recommended in (2).
Absence of oracle staleness check in LPG streams can unlock or lock tokens based on outdated prices
State: Acknowledged
Severity: Low
Submitted by: 0xRajeev
Description
`SafeOracle.safeOraclePrice(...)` does not perform any staleness check on the oracle's `updatedAt` timestamp, as noted in the code: "Because users may not always use Chainlink oracles, we do not check for the staleness of the price." The rationale is that non-Chainlink oracles (e.g., Pyth, Redstone, or custom oracles behind an `AggregatorV3Interface` adapter) may have different update frequencies, making a universal staleness threshold inappropriate. However, the complete absence of staleness protection causes two issues in LPG streams:
- Stale price above target unlocks all tokens prematurely: If the oracle last reported a price above the target but has since gone stale, `calculateStreamedAmountLPG(...)` returns the full deposit as vested. The recipient can withdraw all tokens even if the real price has since dropped below the target.
- Stale price below target locks tokens incorrectly: If the oracle last reported a price below the target but has since gone stale, `calculateStreamedAmountLPG(...)` returns 0. The recipient's tokens remain locked even if the real price has since exceeded the target, denying them access to funds they should be entitled to.
This is notably inconsistent with `_convertUSDFeeToWei(...)` in `SablierComptroller`, which enforces a 24-hour staleness threshold for the same type of oracle data used in fee calculations, a lower-stakes context.

Recommendation
Consider:
- Allowing the LPG stream creator to specify a `maxStaleness` parameter at creation time alongside the oracle address. `calculateStreamedAmountLPG(...)` would then check that `block.timestamp - updatedAt <= maxStaleness` before using the price, returning 0 if the price is stale. This avoids hardcoding a universal threshold in `SafeOracle` while giving each stream creator the flexibility to set an appropriate staleness bound for their chosen oracle's update frequency. Given that the oracle is already a trusted input chosen by the sender, oracle staleness is part of that broader trust risk; however, a configurable staleness bound would provide a concrete, enforceable guardrail within the otherwise trust-dependent oracle model.
- Adding a mechanism to allow the recipient (or both parties via mutual agreement) to force-settle the stream after a prolonged period of oracle inactivity, to mitigate the issue of permanent oracle failure.
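The proposed `maxStaleness` gate can be sketched as follows (an illustrative Python model, not the library's API):

```python
# Sketch of the proposed per-stream staleness gate: the price is only trusted
# if it was updated within max_staleness seconds; otherwise the LPG stream is
# treated as locked (price of 0). Names are illustrative.
def gated_price(price, updated_at, now, max_staleness):
    if now - updated_at > max_staleness:
        return 0  # stale: stay locked rather than unlocking on old data
    return price

# A fresh reading above the target unlocks; a day-old reading does not.
print(gated_price(210, updated_at=1_000, now=1_300, max_staleness=3_600))   # 210
print(gated_price(210, updated_at=1_000, now=90_000, max_staleness=3_600))  # 0
```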
Sablier
While we agree it's a best practice to check for price staleness (and we do have this check in the comptroller, where no fee is charged if the price hasn't been updated in 24 hours), adding a staleness check to the LPG streamed-amount calculation doesn't benefit the recipient because of the following scenario: if the target price is reached (and remains above the target) and the oracle becomes stale, users can't withdraw funds until the end time.
Cantina
Acknowledged.
Missing L2 sequencer uptime check allows stale oracle data exploitation during sequencer downtime
State: Acknowledged
Severity: Low
Submitted by: 0xRajeev
Description
For L2 deployments, `SafeOracle` does not check whether the L2 sequencer is up or down, as recommended by Chainlink's L2 sequencer uptime feed documentation. A staleness check using `updatedAt` alone is not sufficient on L2s and does not replace the need for a sequencer uptime check. When the sequencer is down:

- The price feed on L2 stops updating and the `updatedAt` timestamp freezes.
- Sophisticated users can still submit transactions via L1.
- These L1-submitted transactions execute on L2 before the sequencer's backlog when it comes back up.
Without a sequencer uptime check, the protocol may use stale oracle data from L1-submitted transactions, leading to incorrect LPG stream claims/cancels or incorrect fee calculations in `_convertUSDFeeToWei(...)`. This could be actively exploited by advanced users who take advantage of any price movement that occurs during the downtime or when the sequencer has just recovered.

For example, if the oracle reported a price above the LPG target before the sequencer went down and the real price has since dropped, a recipient could submit a withdrawal via L1 to claim the full deposit using the stale above-target price. Conversely, a sender could cancel a stream using a stale below-target price even though the real price has exceeded the target.
This is not a theoretical concern. For example, Base operates with a single centralized sequencer that has had two downtime incidents so far: September 2023 (~43 minutes) and August 2025 (~33 minutes).
Recommendation
Consider integrating Chainlink's L2 sequencer uptime feed into `SafeOracle`. Before using the oracle price, the library should check the sequencer uptime feed and return 0 (or revert in `validateOracle(...)`) if the sequencer is down or has only recently come back up, allowing a grace period for price feeds to refresh. This check should apply to both `safeOraclePrice(...)` (used in LPG streams) and `_convertUSDFeeToWei(...)` (used in fee calculations).

Sablier
After careful review, we've decided not to implement the sequencer uptime check for the following reasons:
- LPG Stream: Implementing this would create asymmetric outcomes that could favor either party. For example, if the price goes above the target and the sequencer goes down, the check would block recipients from withdrawing their funds; similarly, if the price falls below the target during downtime, senders would be unable to cancel. Since senders typically can always cancel the stream, we believe that in scenarios where the price exceeds the target during sequencer downtime, it is more reasonable to prioritize the recipient's ability to withdraw.
- Convert USD Fee Function: While the sequencer uptime check aims to prevent fee charges during downtime, the practical impact is very limited since it is only used for fee calculations on the order of $1-2. Even in a worst-case real-world scenario, the fee variance would be negligible, resulting from a few hours of price change.
- Limited L2 Support: Chainlink's sequencer uptime feed is not available across all major L2s we support, including Optimism and Linea. Implementing a partial solution would create inconsistent behavior across our deployments. Moreover, it would require maintaining a separate list of contracts, because the price feed contract does not have any function for this, and we do not find it worth the impact it would have on the protocol.
Cantina
Acknowledged.
Attestation signatures have no expiry, preventing revocation of individual recipients
Severity: Low
Submitted by: 0xRajeev
Description
The attestation signature verified in `SablierMerkleSignature._verifyAttestationSignature(...)` is a permanent credential with no expiry or nonce. Once the attestor signs an attestation for a recipient, it remains valid for the lifetime of the campaign. However, KYC/KYB verification, which is the primary intended use case for attestations, is inherently time-bound and revocable. A recipient who passes KYC today:

- Could be sanctioned before the campaign ends.
- Their KYC could expire before the campaign ends.
- Their jurisdiction could become restricted before the campaign ends.
Yet their attestation remains valid indefinitely. The only revocation mechanism is rotating the attestor key via `setAttestor`, which is all-or-nothing and invalidates attestations for every unclaimed recipient in the campaign (or all campaigns using the comptroller's global attestor).
Recommendation
Consider one or more of the following mitigations:
- Adding an expiry field to the attestation struct so that the attestor can issue time-bound attestations that must be refreshed periodically.
- Introducing a per-recipient nonce so that individual attestations can be invalidated by incrementing the nonce without affecting other recipients.
- Adding an on-chain blocklist maintained by the campaign admin or attestor that is checked during `claimViaAttestation(...)`, allowing targeted revocation of compromised or newly sanctioned recipients.
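The first two mitigations can be sketched together in Python (a simplified model of the attestation payload; the field names are illustrative and signature verification itself is elided):

```python
# Sketch of time-bound, nonce-scoped attestations. Struct layout and names
# are illustrative; the actual signature check is elided.
recipient_nonces = {"0xAlice": 0}  # on-chain per-recipient nonce

def attestation_valid(att, now):
    if now > att["expiry"]:
        return False  # time-bound: the attestation must be refreshed
    # Incrementing the stored nonce invalidates this recipient's old
    # attestations without touching anyone else's.
    return att["nonce"] == recipient_nonces[att["recipient"]]

att = {"recipient": "0xAlice", "expiry": 2_000, "nonce": 0}
print(attestation_valid(att, now=1_500))  # True
print(attestation_valid(att, now=2_500))  # False: expired
recipient_nonces["0xAlice"] += 1          # targeted revocation
print(attestation_valid(att, now=1_500))  # False: stale nonce
```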
Enabling redistribution after vesting ends creates unfair ordering-dependent reward distribution
Severity: Low
Submitted by: 0xRajeev
Description
In `SablierMerkleVCA`, the admin can call `enableRedistribution(...)` at any time, including after `VESTING_END_TIME`. Recipients who already claimed their `fullAmount` before redistribution was enabled receive zero rewards, while subsequent claimers get the full proportional share of forgone tokens. This creates an ordering-dependent unfairness where identically situated late claimers receive different payouts based solely on whether they claimed before or after the admin enabled redistribution.

The admin can also strategically time this to benefit specific addresses. For example, after observing that certain recipients have already claimed post-vesting without rewards, the admin could enable redistribution to concentrate the forgone pool among the remaining unclaimed recipients.
Recommendation
Consider restricting `enableRedistribution(...)` to `block.timestamp < VESTING_END_TIME`. This ensures redistribution is committed to before any late claimer (post-vesting) can claim. Since all late claims happen after `VESTING_END_TIME`, no late claimer can be disadvantaged by the timing of `enableRedistribution(...)`. The admin still has the flexibility to watch forgone amounts accumulate during the vesting period before deciding whether to enable redistribution, but cannot time the enabling post-vesting after seeing which late claimers have already claimed without rewards.

Undistributed redistribution rewards are locked in the contract when recipients never claim
State: Acknowledged
Severity: Low
Submitted by: 0xRajeev
Description
In `SablierMerkleVCA`, the redistribution reward denominator `AGGREGATE_AMOUNT - _fullAmountAllocatedToEarlyClaimers` includes allocations for all remaining non-early claimers, both those who eventually claim late and those who never claim at all. When recipients never claim, their share of the forgone pool is never distributed but remains locked in the contract, recoverable only by the admin via clawback within the allowed window.

Example scenario: three recipients: Alice (100), Bob (100), Carol (100). `AGGREGATE_AMOUNT` = 300. Redistribution is enabled.

- Alice claims early at the midpoint, forgoing 50 tokens. `totalForgoneAmount` = 50, `_fullAmountAllocatedToEarlyClaimers` = 100. `fullAmountAllocatedToRemainingClaimers` = 300 - 100 = 200, because the contract assumes both Bob and Carol will claim.
- Bob claims late: rewards = (100 * 50) / 200 = 25. Bob receives 25 out of the 50 forgone tokens.
- Carol never claims. Her proportional share (25 tokens) remains in the contract undistributed and inaccessible outside the clawback window.

This is a capital-efficiency issue: the protocol conservatively reserves Carol's share in case she claims later, but if she never does, those tokens go to nobody rather than being redistributed to Bob.
Recommendation
Consider implementing a two-phase reward distribution where claiming and reward collection are separated. In phase 1, late claimers receive their `fullAmount` and their allocation is recorded. In phase 2, a separate `claimRedistributionRewards(...)` function distributes from the forgone pool using the actual tracked total of late claimer registrations as the denominator rather than `AGGREGATE_AMOUNT`. Because the denominator is derived from actual late claimer registrations, only recipients who claim are included. In the example above, Bob would be the sole late claimer with a denominator of 100, receiving (100 * 50) / 100 = 50, which is the full forgone pool. The trade-off is requiring recipients to make a second transaction to collect redistribution rewards.

For example, the below flow ensures complete proportional fairness without any ordering bias:
- Campaign starts (`CAMPAIGN_START_TIME`): claims open.
- Vesting period (`VESTING_START_TIME` → `VESTING_END_TIME`): early claimers get partial amounts, pay fees, and `totalForgoneAmount` accumulates.
- Post-vesting (`VESTING_END_TIME` → `REWARD_EXPIRATION`): late claimers call `claimTo(...)`, receive `fullAmount`, pay fees, and their allocation is tracked.
- Reward expiration (`REWARD_EXPIRATION`): no more phase 1 claims. Phase 2 opens. The denominator is frozen.
- Reward collection (`REWARD_EXPIRATION` → `EXPIRATION`): only late claimers can call `claimRedistributionRewards(...)`, receive their proportional share, and pay no fee.
- Campaign expires (`EXPIRATION`): no more reward claims. The admin can clawback the remaining balance.
The UX trade-off here is making everyone execute two transactions for fair claims+rewards versus letting them execute one transaction with potential ordering bias.
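Using the Alice/Bob/Carol example, the two-phase flow can be modeled in a few lines of Python (a simplified sketch; the names are illustrative):

```python
# Two-phase sketch: phase 1 records late claimers, phase 2 splits the forgone
# pool over the *actual* late-claim total, not AGGREGATE_AMOUNT.
total_forgone = 50  # forgone by Alice's early claim
late_claims = {}    # recipient -> fullAmount, recorded in phase 1

def claim_to(recipient, full_amount):
    late_claims[recipient] = full_amount  # phase 1: base claim, allocation tracked

def claim_redistribution_rewards(recipient):
    denominator = sum(late_claims.values())  # frozen after REWARD_EXPIRATION
    return (late_claims[recipient] * total_forgone) // denominator

claim_to("Bob", 100)  # Bob is the only late claimer; Carol never claims
print(claim_redistribution_rewards("Bob"))  # 50: the full forgone pool, nothing locked
```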
Sablier
Given our time constraints for the UI implementation, which requires following the same flow as the other campaigns, we decided not to implement it now and to consider it in future versions.
Cantina
Acknowledged.
Model enum reordering changes integer encoding, potentially breaking offchain integrations
Severity: Low
Submitted by: 0xRajeev
Description
The `Lockup.Model` enum was reordered from `{LOCKUP_LINEAR=0, LOCKUP_DYNAMIC=1, LOCKUP_TRANCHED=2}` to `{LOCKUP_DYNAMIC=0, LOCKUP_LINEAR=1, LOCKUP_PRICE_GATED=2, LOCKUP_TRANCHED=3}` while adding support for LPG streams. Because Solidity stores enums as integers, this changes the ABI encoding in events, return values, and storage. Any existing offchain tooling (subgraphs, indexers, frontends, analytics dashboards) that maps integer values to model names will misclassify streams from new deployments. For example, a new Linear stream (now encoded as 1) would be interpreted as Dynamic (previously encoded as 1).

Recommendation
Consider appending new enum values at the end rather than reordering existing ones. This preserves backward compatibility for off-chain tooling: `{LOCKUP_LINEAR=0, LOCKUP_DYNAMIC=1, LOCKUP_TRANCHED=2, LOCKUP_PRICE_GATED=3}`.
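The misclassification can be demonstrated with Python `IntEnum`s mirroring the two orderings (the enum values come from the finding; the decoding step is illustrative of an offchain indexer):

```python
from enum import IntEnum

# Old and new orderings from the finding; Solidity ABI-encodes enums as integers.
class ModelV1(IntEnum):
    LOCKUP_LINEAR = 0
    LOCKUP_DYNAMIC = 1
    LOCKUP_TRANCHED = 2

class ModelV2(IntEnum):
    LOCKUP_DYNAMIC = 0
    LOCKUP_LINEAR = 1
    LOCKUP_PRICE_GATED = 2
    LOCKUP_TRANCHED = 3

# An indexer still decoding with the old mapping misreads new streams:
raw = int(ModelV2.LOCKUP_LINEAR)  # 1, emitted by a new deployment
print(ModelV1(raw).name)          # LOCKUP_DYNAMIC -> misclassified
```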
Informational (13 findings)
Missing batch support for Lockup Price-Gated streams
Severity: Informational
Submitted by: 0xRajeev
Description
`SablierBatchLockup` provides batch creation methods for all existing Lockup models (Dynamic, Linear, and Tranched) but does not include corresponding methods for the newly introduced Price-Gated (LPG) model. The contract has `createWithDurationsLD`, `createWithTimestampsLD`, `createWithDurationsLL`, `createWithTimestampsLL`, `createWithDurationsLT`, and `createWithTimestampsLT`, but no `createWithDurationsLPG` or `createWithTimestampsLPG`.

While LPG streams can still be created individually via `SablierLockup`, users who need to create multiple LPG streams in a single transaction (e.g., a company setting up price-gated vesting for multiple employees) must make separate calls, increasing gas costs and reducing UX consistency.

Recommendation
Consider adding `createWithDurationsLPG` and `createWithTimestampsLPG` to `SablierBatchLockup` following the same pattern used by the other models, along with the corresponding `BatchLockup.CreateWithDurationsLPG` and `BatchLockup.CreateWithTimestampsLPG` structs.

Granularity sentinel value mismatch between airdrop campaign and Lockup stream
Severity: Informational
Submitted by: 0xRajeev
Description
When creating a MerkleLL campaign, a `VESTING_GRANULARITY` of 0 is stored as an immutable in `SablierMerkleLL`. However, when the claim creates the actual Lockup Linear stream, the 0 value is converted to 1 inside `SablierLockupLinear` as a sentinel (0 and 1 both mean "no discrete stepping, continuous streaming"). The on-chain streaming behavior is identical in both cases.

The issue is that offchain indexers and UIs querying the two contracts see inconsistent values: `merkleLL.VESTING_GRANULARITY()` returns 0, while `lockup.getGranularity(streamId)` returns 1 for the same logical stream. This can cause confusion in dashboards, subgraphs, or analytics tools that attempt to reconcile airdrop campaign parameters with the resulting Lockup streams.

Recommendation
Normalize the granularity value at the airdrop contract level by converting 0 to 1 in the `SablierMerkleLL` constructor, so that `VESTING_GRANULARITY()` and `getGranularity(streamId)` always return the same value.

Unchecked subtraction wraps to large value before timestamp validation, bypassing granularity check
Severity: Informational
Submitted by: 0xRajeev
Description
In `LockupHelpers._checkTimestampsAndUnlockAmounts(...)`, when `cliffTime` is zero, the streamable range is calculated using an unchecked subtraction of `timestamps.end - timestamps.start` before the start/end time ordering is validated. If `timestamps.start > timestamps.end`, the unchecked subtraction wraps to a very large uint40 value. This wrapped `streamableRange` will likely be larger than any reasonable granularity, so the granularity check passes silently. The function eventually reverts at the `start >= end` check, but with a misleading execution path.
When `cliffTime > 0`, the ordering checks (`timestamps.start < cliffTime` and `cliffTime < timestamps.end`) happen before the unchecked subtraction, by which point `cliffTime < timestamps.end` and `timestamps.start < timestamps.end` are guaranteed, so the subtraction cannot wrap.
Recommendation
Consider moving the `timestamps.start >= timestamps.end` check into the `else` branch before the unchecked subtraction so that invalid timestamp ordering is caught first with the correct error message and without relying on a later check to catch the underflow.
LPG streams missing createWithTimestamps(...) variant breaks API consistency and prevents precise end dates
Severity
- Severity: Informational
Submitted by
0xRajeev
Description
All other Lockup models (Linear, Dynamic, Tranched) expose both `createWithDurations(...)` and `createWithTimestamps(...)` variants for stream creation. LPG streams only support `createWithDurationsLPG(...)` and do not have a `createWithTimestampsLPG(...)` equivalent. This was an intentional decision due to contract size constraints that prevented including both functions within the bytecode size limit.
The team chose the durations variant, but this creates a limitation: with `createWithTimestamps(...)`, the duration functionality can be abstracted in the UI, but with `createWithDurations(...)` it is not possible to guarantee an exact end date, since the transaction inclusion time is unknown at submission (e.g., a user wanting the stream to end on 1st May at 00:00:00 GMT). This means LPG stream creators lack the granularity to specify precise start and end timestamps, which is particularly important for price-gated vesting where the exact end time determines when the recipient can unconditionally withdraw the full deposit regardless of oracle price.
Recommendation
Consider swapping `createWithDurationsLPG(...)` for `createWithTimestamps(...)` if only one variant can fit within the contract size limit, since the timestamps variant offers strictly greater flexibility. With `createWithTimestamps(...)`, duration-based creation can still be achieved in the UI by computing timestamps off-chain, but the reverse is not possible. If contract size allows in the future, adding both variants would bring LPG in line with the other models.
Incorrect NatSpec for LPG unlock condition specifies "greater than" end time but code uses >=
Severity
- Severity: Informational
Submitted by
0xRajeev
Description
The `ISablierLockupPriceGated` interface documents the time-based unlock condition as: "The recipient can withdraw the full deposited amount when either: 1. The oracle price reaches or exceeds the target price, OR 2. Current time is greater than the stream's end time."
However, `calculateStreamedAmountLPG(...)` in `LockupMath` uses `block.timestamp >= endTime`, which unlocks the full deposit at `endTime` itself. The NatSpec says "greater than" (implying >) while the code implements "greater than or equal to" (>=). This could mislead integrators who rely on the documented behavior to build automation around the exact unlock boundary.
Given that LL/LD/LT streams all use the `block.timestamp >= endTime` check, this is a specification error.
Recommendation
Consider aligning the NatSpec with the code by changing the documentation to: "Current time is greater than or equal to the stream's end time" or "Current time has reached the stream's end time."
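For illustration, the aligned documentation could read as follows (a sketch only; the actual interface declaration is abridged):

```solidity
/// @notice The recipient can withdraw the full deposited amount when either:
/// 1. The oracle price reaches or exceeds the target price, OR
/// 2. The current time is greater than or equal to the stream's end time
///    (i.e. block.timestamp >= endTime, matching calculateStreamedAmountLPG).
```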
MerkleExecute does not support targets that require ETH alongside token approval
Severity
- Severity: Informational
Submitted by
0xRajeev
Description
`SablierMerkleExecute.claimAndExecute(...)` approves the target to spend the claimed token amount and then executes a low-level call with the configured selector and user-provided arguments. However, the call is made with zero `msg.value`.
While `claimAndExecute(...)` is marked payable (inherited from the fee payment mechanism in `_preProcessClaim(...)`), any ETH remaining after the fee is not forwarded to the target. This means targets that require ETH alongside a token approval (e.g., protocols that charge an ETH fee for staking/depositing, or wrapping operations that need both token approval and native ETH) are not supported.
If a campaign creator sets a target that expects ETH with the call, all claims revert, since the target receives no ETH. The claim fee is already consumed in `_preProcessClaim(...)`, so even if the claimant sends extra ETH, it is not forwarded to the target.
Recommendation
Consider allowing the claimant to specify an additional ETH amount to forward to the target by changing the call to `(bool success, bytes memory returnData) = TARGET.call{ value: ethAmount }(callData);`, where `ethAmount` is the remaining `msg.value` after the claim fee has been deducted. Alternatively, if this is an intentional limitation, consider documenting it in the `ISablierMerkleExecute` interface NatSpec so that campaign creators are aware that targets requiring ETH are not supported.
LPG oracle is not validated against the deposited token, allowing use of unrelated price feeds
Severity
- Severity: Informational
Submitted by
0xRajeev
Description
When creating an LPG stream, `validateOracle(...)` checks that the oracle implements `AggregatorV3Interface`, returns 8 decimals, and reports a positive price, but it does not verify that the oracle actually tracks the price of the deposited token. A sender could accidentally use an oracle for an unrelated asset, which is a risk for recipients.
This is not a theoretical concern. There have been real-world incidents where BTC/USD feeds were used for WBTC and ETH/USD feeds were used for stETH or cbETH, both assuming a 1:1 peg that caused issues during depeg events. In the LPG context, using an incorrect oracle could cause the stream to unlock prematurely (if the unrelated asset's price exceeds the target) or never unlock before `endTime` (if the unrelated asset's price remains below the target).
There is no practical on-chain mitigation, since the protocol cannot trustlessly verify that an oracle corresponds to a specific token. While Chainlink feeds expose a `description()` function (e.g., "ETH / USD"), this is a string that is easily spoofed by a custom contract and does not prove the feed actually tracks that pair.
Recommendation
Consider one or more of the following mitigations:
- Store the oracle's `description()` at creation time and surface it prominently in the UI so recipients can cross-reference the oracle's reported pair against the deposited token and identify mismatches before accepting the stream.
- Maintain an oracle registry or allowlist at the protocol level, mapping approved oracle addresses to their corresponding tokens, so that `checkCreateLPG(...)` can validate the oracle is appropriate for the deposited token.
- At minimum, document this risk prominently in the `ISablierLockupPriceGated` interface NatSpec and `SECURITY.md` so that recipients and integrators are aware that verifying the oracle-token relationship is their responsibility.
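As a sketch of the registry approach: the mapping, error, and function body below are hypothetical illustrations, not identifiers from the codebase (only the `checkCreateLPG(...)` name comes from the report, and its real signature differs):

```solidity
// Hypothetical allowlist mapping each token to its approved price feed,
// populated by protocol governance.
mapping(address token => address oracle) public approvedOracles;

function checkCreateLPG(address token, address oracle) internal view {
    // Reject oracles that are not registered for the deposited token.
    if (approvedOracles[token] != oracle) {
        revert OracleNotApprovedForToken(token, oracle);
    }
    // ... existing validateOracle(...) checks (interface, decimals, positive price) ...
}
```

This trades permissionless oracle choice for recipient safety and introduces a governance dependency, which is why it is listed as one option among several rather than the single fix.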
Enforcing 8 decimal oracles contradicts design goal of supporting sender-chosen oracles from any vendor
Severity
- Severity: Informational
Submitted by
0xRajeev
Description
`SafeOracle.validateOracle(...)` enforces that the oracle returns exactly 8 decimals. While Chainlink and a few other providers use 8 decimals for their USD price feeds, other oracle providers that implement the `AggregatorV3Interface` use 18 decimals. Chronicle oracles, for example, return 18 decimals despite offering Chainlink-compatible adapters. API3 price feeds similarly use 18 decimals. This means LPG streams and fee calculations in `_convertUSDFeeToWei(...)` cannot use these oracle providers directly.
The LPG model is built around the premise that senders can choose their own oracle from any vendor, as evidenced by the code comment in `safeOraclePrice(...)`: "Because users may not always use Chainlink oracles, we do not check for the staleness of the price." However, the 8-decimal enforcement effectively restricts oracle choice to Chainlink/Pyth/Redstone (and any others using 8 decimals), undermining this flexibility. Users who want to use Chronicle or API3 feeds must deploy wrapper contracts that convert 18-decimal prices to 8 decimals, adding complexity, gas overhead, and an additional point of failure.
Recommendation
Consider making the decimal requirement configurable rather than hardcoded to 8. The oracle's `decimals()` return value could be stored alongside the oracle address at LPG stream creation time and used to normalize the price in `safeOraclePrice(...)` and `calculateStreamedAmountLPG(...)`. This would allow the protocol to support oracles of any decimal precision while maintaining correct price comparisons against the target price.
Attested users who cannot transact are locked out due to missing delegation in attestation claims
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
0xRajeev
Description
Unlike `claimViaSig(...)`, which allows anyone to submit on behalf of the recipient, `claimViaAttestation(...)` has no equivalent delegation path. The function hardcodes `msg.sender` as the recipient:
`_verifyAttestationSignature(msg.sender, attestation);`
`_preProcessClaim({ index: index, recipient: msg.sender, ... });`
If a recipient is attested but unable to submit transactions (e.g., the recipient's newly created wallet has no native token to pay for gas, which is common with new CEX users onboarded via KYC), they cannot claim. `claimViaSig(...)` is gated to `ClaimType.DEFAULT` and is unavailable as a fallback for `ATTEST` campaigns.
Example scenario:
- The attestor signs an attestation for Alice's wallet address.
- Alice wants to claim but cannot submit `claimViaAttestation(...)` because her wallet has no gas.
- Bob (a relayer/friend) offers to submit the claim on Alice's behalf.
- Bob cannot call `claimViaAttestation(...)` because the function uses `msg.sender` as the recipient, so it would verify Bob's address against Alice's attestation and fail.
- Bob cannot call `claimViaSig(...)` because it is gated to `ClaimType.DEFAULT` and reverts with `UnsupportedClaimType(DEFAULT, ATTEST)` for this campaign.
- Alice is locked out until she independently funds her wallet with ETH.
Recommendation
Consider adding a `claimViaAttestationFor(...)` variant that takes an explicit recipient parameter and performs `_verifyAttestationSignature(recipient, attestation)`, similar to how `claimViaSig(...)` separates submitter from recipient. This allows any third party to submit the transaction while the attestation still binds to the intended recipient.
Sablier
We do acknowledge this but have decided not to implement it in the first version of the claim-via-attestation feature. We will, however, consider it in future upgrades.
Cantina
Acknowledged.
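Should this be revisited in a future upgrade, a minimal sketch of the suggested variant could look like the following (the parameter list and helper signatures are assumptions based on the snippets above, not the actual implementation):

```solidity
/// @notice Claims on behalf of an attested recipient; callable by anyone,
/// so a relayer can pay gas while tokens still go to the attested address.
function claimViaAttestationFor(
    uint256 index,
    address recipient,
    bytes calldata attestation
    // ... remaining claim parameters elided ...
)
    external
    payable
{
    // Bind the attestation to the explicit recipient instead of msg.sender.
    _verifyAttestationSignature(recipient, attestation);
    _preProcessClaim({ index: index, recipient: recipient /* , ... */ });
}
```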
Attestor management gated by FEE_MANAGEMENT_ROLE violates the principle of least privilege
Severity
- Severity: Informational
Submitted by
0xRajeev
Description
In `SablierComptroller`, both `setAttestor(...)` and `setAttestorForCampaign(...)` are gated by `FEE_MANAGEMENT_ROLE`. This same role also controls fee-related functions such as `disableCustomFeeUSDFor(...)`, `setCustomFeeUSDFor(...)`, `setMinFeeUSDFor(...)`, and `setMinFeeUSDForCampaign(...)`. Attestor management (controlling who can verify KYC/identity for airdrop campaigns) is a fundamentally different responsibility from fee management, yet both are controlled by the same role.
This conflation violates the principle of least privilege. An operator granted `FEE_MANAGEMENT_ROLE` to manage protocol fees also gains the ability to rotate attestors across all campaigns, potentially invalidating pending attestations or redirecting KYC verification to a compromised key. Conversely, an operator who only needs to manage attestors must be granted fee management capabilities they do not need.
Recommendation
Consider introducing a dedicated role (e.g., `ATTESTOR_MANAGEMENT_ROLE`) for `setAttestor(...)` and `setAttestorForCampaign(...)`, separate from `FEE_MANAGEMENT_ROLE`. This allows the protocol to grant attestor management to a KYC/compliance operator without also granting them the ability to modify fees, and vice versa.
Unused Merkle Campaign Validation Errors in Comptroller
Severity
- Severity: Informational
Submitted by
Arno
Description
The `Errors` library defines two errors:
- `SablierComptroller_IsSablierMerkleReturnsFalse`
- `SablierComptroller_MissingIsSablierMerkle`
However, these errors are never used.
Recommendation
Remove the unused errors from the `Errors` library.
Incorrect comment for setAttestor overstates impact of setting zero address
Severity
- Severity: Informational
Submitted by
0xRajeev
Description
In `ISablierComptroller.sol`, the comment for `setAttestor(...)` states: "Setting it to zero address would disable attestation-based claims globally." However, this is only true for campaigns that rely on the comptroller's global attestor fallback. Campaigns that have set their own local attestor via `setAttestor(...)` on the campaign contract are unaffected, since `_getAttestor(...)` in `SablierMerkleSignature` returns the local attestor when it is non-zero.
Recommendation
Consider updating the comment to accurately reflect the behavior, e.g.: "Setting it to zero address would disable attestation-based claims for campaigns that have not set their own local attestor."
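For example, the corrected NatSpec could read as follows (a sketch of the suggested wording only):

```solidity
/// @dev Setting the attestor to the zero address disables attestation-based claims
/// only for campaigns that rely on the comptroller's global attestor fallback;
/// campaigns that have set their own local attestor are unaffected.
```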
Empty array in duration-based creation functions reverts with raw panic instead of custom error
Severity
- Severity: Informational
Submitted by
Arno
Description
When `createWithDurationsLD` is called with an empty segments array, it reverts with a raw Solidity `Panic(0x32)` from `calculateSegmentTimestamps` before reaching the proper validation. The equivalent `createWithTimestampsLD` correctly reverts with the custom error `SablierLockupHelpers_SegmentCountZero`. The same issue exists in `createWithDurationsLT`.
Recommendation
The fix is to add an early `segmentsWithDuration.length == 0` guard in `createWithDurationsLD`, and the equivalent guard for tranches in `createWithDurationsLT`.
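A minimal sketch of the suggested guard follows; the function signature and struct names are assumptions for illustration, while the error name is taken from the report:

```solidity
function createWithDurationsLD(
    Lockup.CreateWithDurations calldata params,
    LockupDynamic.SegmentWithDuration[] calldata segmentsWithDuration
)
    external
    returns (uint256 streamId)
{
    // Revert early with the custom error instead of letting
    // calculateSegmentTimestamps hit a raw Panic(0x32) on the empty array.
    if (segmentsWithDuration.length == 0) {
        revert Errors.SablierLockupHelpers_SegmentCountZero();
    }
    // ... existing segment timestamp calculation and stream creation ...
}
```

The same pattern applies to `createWithDurationsLT` with its tranche array.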