Organization
- @sablier-labs
Engagement Type
Cantina Reviews
Period
-
Repositories
Researchers
Findings
Medium Risk
1 finding
1 fixed
0 acknowledged
Low Risk
2 findings
2 fixed
0 acknowledged
Informational
8 findings
2 fixed
6 acknowledged
Gas Optimizations
1 finding
0 fixed
1 acknowledged
Medium Risk (1 finding)
Incorrect cliff time handling in streaming start calculation
Summary
The streaming start time calculation in the lockup contract incorrectly uses the cliff unlock amount instead of the cliff timestamp to determine when streaming should begin. This causes tokens to be unlocked too early when a stream has a cliff time configured but no cliff unlock amount, violating the intended vesting schedule.
Description
The `get_streamed_amount()` function in `lockup_math.rs` incorrectly determines the streaming start time by checking the `cliff_unlock` amount instead of the `cliff` timestamp. This causes issues when a stream has a cliff time configured but no cliff unlock amount. The current logic at lines 34-38:

```rust
let start_time: i64 = if amounts.cliff_unlock == 0 {
    timestamps.start
} else {
    timestamps.cliff
};
```
This approach uses the cliff unlock amount as a proxy for determining whether streaming should start at the start time or cliff time. However, it's possible to have a valid stream configuration where:
- A cliff time is set (`timestamps.cliff > 0`)
- No tokens are unlocked at the cliff (`amounts.cliff_unlock == 0`)
- Some tokens may be unlocked at start (`amounts.start_unlock_amount > 0`)
In this scenario, streaming should wait until the cliff time before beginning, but the current logic would incorrectly start streaming immediately because `cliff_unlock == 0`.
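To make the effect concrete, here is a minimal, self-contained sketch of the linear streaming math (the struct fields and formula are simplified assumptions modeled on the report, not the program's actual code):

```rust
// Simplified reproduction of the buggy start-time selection described above.
struct Timestamps { start: i64, cliff: i64, end: i64 }
struct Amounts { deposited: u64, cliff_unlock: u64 }

fn streamed_amount(now: i64, t: &Timestamps, a: &Amounts) -> u64 {
    // Buggy: the cliff unlock AMOUNT decides the streaming start time.
    let start_time = if a.cliff_unlock == 0 { t.start } else { t.cliff };
    if now <= start_time {
        return 0;
    }
    let elapsed = (now - start_time) as u128;
    let duration = (t.end - start_time) as u128;
    (a.deposited as u128 * elapsed / duration) as u64
}

fn main() {
    // Cliff at t = 100 with zero cliff unlock: nothing should stream before t = 100.
    let t = Timestamps { start: 0, cliff: 100, end: 200 };
    let a = Amounts { deposited: 1_000, cliff_unlock: 0 };
    // At t = 50, before the cliff, the buggy logic already reports 250 streamed tokens.
    assert_eq!(streamed_amount(50, &t, &a), 250);
}
```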
Impact Explanation
The impact is medium to high as tokens are unlocked earlier than intended, potentially violating vesting agreements and allowing recipients to access funds before the configured cliff period. While this doesn't result in total loss of funds, it breaks the core timing guarantees of the streaming protocol.
Likelihood Explanation
The likelihood is medium as this affects any stream configured with a cliff time but zero cliff unlock amount. While this may be a less common configuration, it is explicitly supported by the validation logic and represents a legitimate use case for vesting schedules.
Recommendation
Update the streaming start time calculation to check the cliff timestamp instead of the cliff unlock amount:
```diff
- let start_time: i64 = if amounts.cliff_unlock == 0 {
+ let start_time: i64 = if timestamps.cliff == 0 {
      timestamps.start
  } else {
      timestamps.cliff
  };
```
Low Risk (2 findings)
Canceled streams continue accruing withdrawable amounts over time
Summary
The stream cancellation mechanism in the Sablier lockup contract contains a vulnerability where canceled streams continue to accrue withdrawable amounts over time instead of being frozen at the cancellation state. This occurs because the `get_streamed_amount()` function does not account for canceled or depleted stream states, allowing recipients to attempt withdrawals of amounts that were already refunded to the sender.
Description
When a stream is canceled through the `cancel()` function, the contract correctly:
- Calculates the streamed amount at the time of cancellation
- Refunds the unstreamed portion to the sender
- Marks the stream as canceled (`was_canceled = true`)
- Sets `amounts.refunded` to track the refunded amount
However, the `get_streamed_amount()` function in `lockup_math.rs` continues to calculate increasing streamed amounts over time without checking whether the stream was canceled or depleted. This leads to several issues:
- The `withdrawable_amount` continues to grow after cancellation based on time progression
- Recipients can attempt to withdraw more than the amount that should be available
- While `transfer_tokens()` will ultimately fail due to insufficient balance in the stream account, this creates a poor user experience and wastes transaction fees
Consider this scenario:
- Creator deposits 100 tokens
- Stream is 20% complete (streamed_amount = 20; the recipient has not withdrawn)
- Stream is canceled: the sender gets 80 tokens refunded, 20 tokens remain for the recipient
- Time passes and the streamed amount calculation reaches 40 tokens
- Recipient attempts `withdraw_max()` expecting 40 tokens, but only 20 are available
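A compact sketch of this scenario, assuming a purely linear streaming formula with no cancellation check (a simplification of the real math):

```rust
// Linear stream of 100 tokens over 100 seconds, canceled at t = 20.
fn streamed(now: u64, deposited: u64, duration: u64) -> u64 {
    deposited * now.min(duration) / duration
}

fn main() {
    let (deposited, duration) = (100u64, 100u64);
    // On cancellation, the sender is refunded the unstreamed 80 tokens.
    let refunded = deposited - streamed(20, deposited, duration);
    assert_eq!(refunded, 80);
    // At t = 40 the unchecked calculation reports 40 withdrawable tokens,
    // but only deposited - refunded = 20 actually remain in the stream account.
    assert_eq!(streamed(40, deposited, duration), 40);
    assert_eq!(deposited - refunded, 20);
}
```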
Impact Explanation
The impact is low, as normal withdrawals can still be used with the correct capped amount. While the `withdraw_max()` function may attempt to withdraw more than is available, individual withdrawals with specific amounts will work properly up to the actual available balance. The main issues are a poor user experience and wasted transaction fees.
Likelihood Explanation
The likelihood is medium, as this will occur whenever a stream is canceled and the recipient subsequently uses `withdraw_max()` rather than an individual withdrawal, or whenever `withdrawable_amount_of()` is checked. Users can work around the issue by using normal withdraw functions with appropriate amounts, though they may not be aware of this limitation.
Recommendation
Modify the `get_streamed_amount()` function to properly handle canceled and depleted stream states:

```diff
- pub fn get_streamed_amount(timestamps: &Timestamps, amounts: &Amounts) -> u64 {
+ pub fn get_streamed_amount(timestamps: &Timestamps, amounts: &Amounts, is_depleted: bool, was_canceled: bool) -> u64 {
+     // If the stream is depleted, return the withdrawn amount.
+     if is_depleted {
+         return amounts.withdrawn;
+     }
+
+     // If the stream was canceled, return deposited minus refunded amount.
+     if was_canceled {
+         return amounts.deposited - amounts.refunded;
+     }
      let now = Clock::get().unwrap().unix_timestamp;

      // If the start time is in the future, return zero.
      if timestamps.start > now {
          return 0;
      }

      // ... rest of existing logic
  }
```
Potential overflow in fee calculation for non-standard oracle decimals
Summary
The fee calculation function in the Sablier lockup contract contains a potential integer overflow when processing Chainlink oracle feeds with non-standard decimal formats. While current Chainlink feeds use 8 decimals, the code attempts to handle arbitrary decimal counts using u64 arithmetic, which can overflow for larger decimal values and high USD fees.
Description
The `convert_usd_fee_to_lamports()` function calculates withdrawal fees by converting USD amounts to lamports using Chainlink price feeds. For non-8-decimal feeds, the calculation uses:

```rust
fee_usd * 10_u64.pow(1 + decimals as u32) / price
```
This calculation can overflow when using u64 arithmetic. For example:
- `fee_usd = 2e8` (representing $2 with 8 decimals)
- `decimals = 12`
- `fee_usd * 10^(12+1) = 2e8 * 10^13 = 2e21`

Since the maximum value of u64 is approximately 1.8e19, this calculation would overflow and cause the transaction to panic.
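The bound can be checked directly with the constants from the example above (a standalone sketch, not the program's code):

```rust
fn main() {
    let fee_usd: u64 = 200_000_000; // $2 with 8 decimals
    let decimals: u32 = 12;         // hypothetical non-standard feed
    // 2e8 * 10^13 = 2e21 exceeds u64::MAX (~1.8e19), so checked_mul reports overflow.
    assert!(fee_usd.checked_mul(10_u64.pow(1 + decimals)).is_none());
    // The same computation succeeds in u128.
    let scaled = fee_usd as u128 * 10_u128.pow(1 + decimals);
    assert_eq!(scaled, 2_000_000_000_000_000_000_000);
}
```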
While the protocol expects to use 8-decimal Chainlink feeds, the code includes logic to handle other decimal formats, creating a potential vulnerability if non-standard feeds are ever used or if Chainlink changes their decimal format in the future.
Impact Explanation
The impact is medium as an overflow would cause withdrawal transactions to fail completely, preventing users from accessing their funds. However, the issue only affects scenarios with non-standard oracle decimal formats, which are not currently used by Chainlink.
Likelihood Explanation
The likelihood is low as current Chainlink SOL/USD feeds use 8 decimals, and the protocol team intends to only use 8-decimal feeds. The overflow would only occur if feeds with higher decimal counts were used, which is unlikely given current Chainlink standards.
Recommendation
Consider using u128 arithmetic for the fee calculation to handle arbitrary decimal formats safely:
```diff
- let fee_in_lamports: u64 = match oracle_decimals {
+ let fee_in_lamports: u128 = match oracle_decimals {
      8 => {
          // If the oracle decimals are 8, calculate the fee.
-         fee_usd * LAMPORTS_PER_SOL / price
+         fee_usd as u128 * LAMPORTS_PER_SOL as u128 / price as u128
      }
      decimals => {
          // Otherwise, adjust the calculation to account for the oracle decimals.
-         fee_usd * 10_u64.pow(1 + decimals as u32) / price
+         fee_usd as u128 * 10_u128.pow(1 + decimals as u32) / price as u128
      }
  };

+ // Safely convert back to u64, capping at u64::MAX if needed.
+ fee_in_lamports.min(u64::MAX as u128) as u64
```
This approach provides safety against overflow while maintaining the current behavior for 8-decimal feeds and handling edge cases where the calculated fee exceeds u64 limits.
Informational (8 findings)
The chain does not verify campaign argument correctness
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
Rikard Hjort
Description
The chain does not verify that the Merkle root is correct -- which is sensible. However, this verification should be performed on the frontend to avoid fake and scam campaigns.
For example, a malicious user can create a fake campaign with a well-known token and a large `aggregate_amount`. This encourages spam -- as it might be listed high on a leaderboard or similar -- and can crowd out visibility of real airdrops, as well as creating fake legitimacy for scam airdrops.
Currently, the team has described the frontend as presenting airdrops as they are, with the assumption that they are only created through their frontend.
Recommendation
It is easy to check whether a campaign is valid and/or trustworthy in three main dimensions:
- Is the campaign Merkle root a true representation of a real tree with valid leaves and a matching `aggregate_amount`, `recipient_count`, etc.?
- Is the campaign fully funded, based on the total amounts in the Merkle tree?
- Is the campaign currently in a state where it can be clawed back at any time?

The frontend can calculate the Merkle root from the IPFS data and check the correctness of the on-chain Merkle root, and determine whether or not the campaign is fully funded. This could be used to filter out fake or problematic airdrops, give users relevant information on which claims may fail due to a lack of funds, and encourage proper campaign creation.
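As a sketch of such a check, the root can be recomputed from the published leaf data (the leaf encoding and hash below are illustrative assumptions; the campaign's actual leaf format must be used):

```rust
use sha2::{Digest, Sha256};

// Hypothetical leaf encoding: (index, recipient, amount) hashed with SHA-256.
fn leaf_hash(index: u32, recipient: &[u8; 32], amount: u64) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(index.to_le_bytes());
    h.update(recipient);
    h.update(amount.to_le_bytes());
    h.finalize().into()
}

// Fold the leaves pairwise up to the root, duplicating an odd trailing node.
fn merkle_root(mut nodes: Vec<[u8; 32]>) -> [u8; 32] {
    assert!(!nodes.is_empty());
    while nodes.len() > 1 {
        nodes = nodes
            .chunks(2)
            .map(|pair| {
                let mut h = Sha256::new();
                h.update(pair[0]);
                h.update(pair.get(1).copied().unwrap_or(pair[0]));
                h.finalize().into()
            })
            .collect();
    }
    nodes[0]
}
```

Comparing the recomputed root (and the summed leaf amounts) against the on-chain campaign account would be enough to flag fake or underfunded campaigns.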
In that vein, it could also be sensible to add a boolean indicating whether or not the campaign has been clawed back.
Timestamps are not validated
State
- Fixed
PR #249
Severity
- Severity: Informational
Submitted by
Rikard Hjort
Description
Timestamps are not validated on campaign creation. They could be less than the current timestamp or even negative. While this seems to cause no direct bugs -- they are simply "in the past", or, in the case of `expiration_time` being `0`, treated as infinity (no expiry) -- it is an unnecessary complexity to allow.
It may also cause strange UI behavior, since the UI would have to validate and correctly deal with negative timestamps.
Recommendation
Check on creation that the campaign timestamps are `>= 0`.
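A minimal sketch of such a check in Anchor (the error enum and its placement in the handler are assumptions):

```rust
use anchor_lang::prelude::*;

#[error_code]
pub enum CampaignError {
    #[msg("Campaign timestamps must be non-negative.")]
    InvalidTimestamp,
}

// Called from the create_campaign handler before initializing the account.
pub fn validate_timestamps(campaign_start_time: i64, expiration_time: i64) -> Result<()> {
    // expiration_time == 0 still means "no expiry" and remains allowed.
    require!(campaign_start_time >= 0, CampaignError::InvalidTimestamp);
    require!(expiration_time >= 0, CampaignError::InvalidTimestamp);
    Ok(())
}
```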
Program initialization can be front-run by malicious actors
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
phaze
Description
The Sablier lockup and merkle_instant program's initialization functions lack access control, allowing any user to initialize the programs with arbitrary parameters, potentially setting malicious fee collectors or oracle configurations.
The `initialize()` function accepts critical configuration parameters, including:
- `fee_collector`: the address that receives withdrawal fees
- `chainlink_program`: the Chainlink program address for price feeds
- `chainlink_sol_usd_feed`: the SOL/USD price feed address
Since the function has no access controls and can only be called once, a malicious actor monitoring transactions could front-run the legitimate initialization with their own parameters. This would allow them to:
- Set themselves as the fee collector to steal all withdrawal fees
- Provide a malicious oracle feed to manipulate fee calculations
- Configure incorrect Chainlink program addresses
While the legitimate deployers would notice the failed initialization and could redeploy, this creates operational risks and potential for silent failures if deployment scripts are not robust enough to detect the front-running.
Recommendation
Consider adding access control to the initialization function by checking that the initializer is the upgrade authority.
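One common way to express this in Anchor is to require the program's `ProgramData` account and compare its upgrade authority against the signer; a hedged sketch follows (account names assumed, not the program's actual code):

```rust
use anchor_lang::prelude::*;

#[derive(Accounts)]
pub struct Initialize<'info> {
    #[account(mut)]
    pub initializer: Signer<'info>,
    // The BPF upgradeable loader's ProgramData account records the upgrade
    // authority; this constraint rejects any other initializer.
    #[account(
        constraint = program_data.upgrade_authority_address == Some(initializer.key())
    )]
    pub program_data: Account<'info, ProgramData>,
    // ... the config account being initialized, system program, etc.
}
```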
Additionally, ensure deployment scripts include robust verification that the initialization was performed with the correct parameters and by the intended authority.
Account creation can be blocked through predictable address pre-creation
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
phaze
Description
The stream creation process in the Sablier lockup contract uses deterministic addresses for creating accounts, which can be exploited by attackers to block legitimate transactions through preemptive account creation.
Specifically, the `recipient_stream_nft_ata` account creation in `create_with_timestamps.rs` uses predictable parameters:

```rust
#[account(
    init,
    payer = creator,
    associated_token::mint = stream_nft_mint,
    associated_token::authority = recipient,
    associated_token::token_program = nft_token_program,
)]
pub recipient_stream_nft_ata: Box<InterfaceAccount<'info, TokenAccount>>,
```
If an attacker can predict the `sender`, `salt`, and `recipient` parameters, they can pre-create the associated token account at the expected address, causing the legitimate stream creation transaction to fail when it attempts to initialize an already-existing account.
A similar issue exists in the merkle instant program, where the `campaign_ata` account uses deterministic addressing based on campaign parameters that might be discoverable before the official campaign launch.
While the likelihood is reduced by the ability to use unpredictable salts and retry with different parameters, the potential for denial of service exists when account addresses can be anticipated.
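For context on predictability, the ATA address is a plain PDA derivation that anyone can compute in advance (a sketch using the standard associated token program ID):

```rust
use std::str::FromStr;
use solana_program::pubkey::Pubkey;

// An associated token account address is a PDA of (wallet, token program, mint)
// under the associated token program, so it is fully predictable.
fn derive_ata(wallet: &Pubkey, mint: &Pubkey, token_program: &Pubkey) -> Pubkey {
    let ata_program =
        Pubkey::from_str("ATokenGPvbdGVxr1b2hvZbsiqW5xWH25efTNsLJA8knL").unwrap();
    Pubkey::find_program_address(
        &[wallet.as_ref(), token_program.as_ref(), mint.as_ref()],
        &ata_program,
    )
    .0
}
```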
Recommendation
Consider using `init_if_needed` instead of `init` for accounts that could potentially be pre-created by attackers:

```diff
  /// Create account: the ATA for the stream NFT owned by the recipient.
  #[account(
-     init,
+     init_if_needed,
      payer = creator,
      associated_token::mint = stream_nft_mint,
      associated_token::authority = recipient,
      associated_token::token_program = nft_token_program,
  )]
  pub recipient_stream_nft_ata: Box<InterfaceAccount<'info, TokenAccount>>,
```
This change allows the transaction to proceed even if the account was pre-created, while maintaining the same functionality. The approach is beneficial for associated token accounts where pre-creation by attackers would not compromise the account's security or functionality, only its creation timing.
Use of deprecated Metaplex token metadata functions
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
phaze
Description
The Sablier lockup contract uses deprecated Metaplex token metadata functions that have been replaced with newer alternatives. Specifically, the code uses `create_metadata_accounts_v3` and `CollectionDetails::V1`, which have been deprecated in favor of the `create_v1` instruction.
The deprecated functions are found in:
- `create_metadata_accounts_v3` at line 55 in `nft.rs`
- `set_and_verify_sized_collection_item` at line 104 in `nft.rs`
- `CollectionDetails::V1` at line 184 in `nft.rs`
According to the Metaplex documentation, `CreateMetadataAccountV3` has been replaced with `CreateV1`, which provides a more streamlined approach by directly creating the mint and master edition in a single instruction rather than requiring separate calls.
The `set_and_verify_sized_collection_item` function is also deprecated, as it both verifies the collection item and increments the collection size. The newer approach uses the `verify()` instruction, which calls `verify_collection_v1()` internally, providing a cleaner separation of concerns.
Additionally, the entire collection size tracking mechanism appears to be deprecated in the newer Metaplex implementations. The `CollectionDetails::V1` approach for tracking collection size has been superseded, though the current implementation correctly uses an independent `total_supply` tracking mechanism in the `nft_collection_data` account.
While the deprecated functions continue to work and currently don't introduce security vulnerabilities, using outdated APIs may lead to future compatibility issues and indicates the codebase is not following current best practices.
Recommendation
Consider upgrading to the newer Metaplex token metadata API when feasible:
- Replace `create_metadata_accounts_v3` with the `create_v1` instruction
- Update the collection creation logic to use the current recommended patterns
- Ensure thorough testing of the new implementation across both backend and frontend integrations
This upgrade would improve long-term maintainability and ensure compatibility with future Metaplex updates. Given the complexity of the changes required, this could be considered for a future release.
NFT names can be mined to match existing streams for potential phishing
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
phaze
Description
The NFT naming scheme in the Sablier lockup contract creates stream NFT names using a truncated format that displays only the first and last 5 characters of the mint key:

```rust
let nft_name = format!("{NFT_NAME_PREFIX}{}...{}", &mint_key[..5], &mint_key[mint_key.len() - 5..]);
```
This format results in names like "Sablier LL Stream #2qidf...dm8jF" where the middle portion of the mint key is hidden. An attacker could potentially mine mint keys (by trying different salt values) to create streams with NFT names that visually match existing legitimate streams.
Since the salt parameter is user-controlled in the stream creation process, an attacker could iterate through different salt values until they generate a mint key that produces the same truncated display format as a target stream. This could enable phishing attacks where malicious streams appear to have the same name as legitimate ones in wallet interfaces or NFT marketplaces.
While this requires significant computational effort and the full mint keys would still be different, users who only look at the displayed NFT name might be deceived into interacting with the wrong stream.
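A rough cost estimate, assuming the ten displayed characters are uniform over the base-58 alphabet (a simplification; real keys are not perfectly uniform in their base-58 digits):

```rust
fn main() {
    // Matching 5 leading + 5 trailing base-58 characters of a specific target
    // key needs about 58^10 candidate mint keys on average.
    let attempts = 58f64.powi(10);
    println!("~{attempts:.2e} expected attempts"); // ~4.30e17
}
```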
Recommendation
Consider using a more collision-resistant naming scheme, such as:
- Use the full mint key: Display the complete mint address to eliminate any possibility of visual collision
- Use a longer truncation: Increase the number of characters displayed from each end of the mint key to make collision mining computationally infeasible
For example:

```rust
// Option 1: Full mint key
let nft_name = format!("{NFT_NAME_PREFIX}{}", mint_key);

// Option 2: Longer truncation (first 8, last 8)
let nft_name = format!("{NFT_NAME_PREFIX}{}...{}", &mint_key[..8], &mint_key[mint_key.len() - 8..]);
```
IPFS CID omitted from campaign account seeds
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
phaze
Description
The campaign account creation in the merkle instant program excludes the `ipfs_cid` parameter from the deterministic address generation seeds, while including other campaign parameters like `merkle_root`, `campaign_start_time`, `expiration_time`, and `name`.
The current seed structure at lines 42-50:

```rust
seeds = [
    CAMPAIGN,
    creator.key().as_ref(),
    merkle_root.as_ref(),
    campaign_start_time.to_le_bytes().as_ref(),
    expiration_time.to_le_bytes().as_ref(),
    name.as_ref(),
    airdrop_token_mint.key().as_ref(),
],
```
The `ipfs_cid` is passed as a parameter and stored in the campaign account but not used for address derivation. This omission was made because IPFS CIDs can exceed the 32-byte limit imposed by Anchor for seed parameters.
However, this creates a scenario where two campaigns with identical parameters (same creator, merkle root, timing, name, and token) but different IPFS CIDs would attempt to use the same account address, potentially causing conflicts during campaign creation. While this is a very unlikely scenario in practice, as it would require a creator to launch multiple campaigns with identical parameters except for the IPFS CID, it represents a potential edge case.
Recommendation
Consider including the IPFS CID in the seed derivation by hashing it to fit within the 32-byte constraint:
```rust
seeds = [
    CAMPAIGN,
    creator.key().as_ref(),
    merkle_root.as_ref(),
    campaign_start_time.to_le_bytes().as_ref(),
    expiration_time.to_le_bytes().as_ref(),
    name.as_ref(),
    airdrop_token_mint.key().as_ref(),
    hash(&ipfs_cid).as_ref(), // Add hashed CID to ensure uniqueness
],
```
Using a secure hash function like SHA-256 or Keccak-256 on the IPFS CID fits within the seed size constraints while providing 256-bit collision resistance, which is cryptographically secure for all practical purposes, even though it is not identical to including the full CID. This approach ensures that campaigns with different IPFS CIDs generate different account addresses, preventing potential conflicts.
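A sketch of producing that 32-byte seed with the Solana runtime's built-in SHA-256 (integration details are left to the program):

```rust
use anchor_lang::solana_program::hash::hash;

// Hash the CID string down to a fixed 32 bytes so it fits Anchor's
// per-seed size limit.
fn ipfs_cid_seed(ipfs_cid: &str) -> [u8; 32] {
    hash(ipfs_cid.as_bytes()).to_bytes()
}
```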
Code quality issues
Description and Recommendations
- Incorrect comment in clawback function
The comment at line 81 in `clawback.rs` incorrectly states the transfer destination:

```diff
- // Interaction: transfer tokens from the Campaign's ATA to the campaign creator's ATA.
+ // Interaction: transfer tokens from the Campaign's ATA to the clawback recipient's ATA.
```
- Inconsistent function argument order
The `create_campaign` function in `lib.rs` and its handler have different parameter orders. Consider updating the handler function signature to match the order in the main function for consistency:

```rust
// In lib.rs
pub fn create_campaign(
    ctx: Context<CreateCampaign>,
    merkle_root: [u8; 32],
    campaign_start_time: i64,
    expiration_time: i64,
    name: String,
    ipfs_cid: String,
    aggregate_amount: u64,
    recipient_count: u32,
) -> Result<()>

// The handler should match this order.
```
- Inconsistent event field order
The `FeesCollected` event emission at line 47 in `collect_fees.rs` has fields in the wrong order. Update to match the event definition:

```diff
  emit!(events::FeesCollected {
+     fee_amount: collectible_amount,
      fee_collector: ctx.accounts.fee_collector.key(),
      fee_recipient: ctx.accounts.fee_recipient.key(),
-     fee_amount: collectible_amount
  });
```
Gas Optimizations (1 finding)
Redundant seed checks for Metaplex accounts
State
- Acknowledged
Severity
- Severity: Gas optimization
Submitted by
phaze
Description
The stream creation instruction includes explicit seed validation constraints for Metaplex token metadata accounts (`nft_collection_master_edition` and `nft_collection_metadata`) that may be redundant, since the Metaplex program itself already performs these validations during execution.
The seed constraints at lines 58-83 in `create_with_timestamps.rs`:

```rust
#[account(
    seeds = [
        METADATA,
        token_metadata_program.key().as_ref(),
        nft_collection_mint.key().as_ref(),
        EDITION,
    ],
    seeds::program = token_metadata_program.key(),
    bump,
)]
pub nft_collection_master_edition: UncheckedAccount<'info>,

#[account(
    mut,
    seeds = [
        METADATA,
        token_metadata_program.key().as_ref(),
        nft_collection_mint.key().as_ref(),
    ],
    seeds::program = token_metadata_program.key(),
    bump,
)]
pub nft_collection_metadata: UncheckedAccount<'info>,
```
While these constraints don't cause functional issues, they duplicate validation logic that the Metaplex program already enforces. The redundant checks provide some benefits including earlier failure detection and clearer error messages, but they also add computational overhead during instruction context validation.
Recommendation
Consider removing the redundant seed constraints for optimization:
```diff
- #[account(
-     seeds = [
-         METADATA,
-         token_metadata_program.key().as_ref(),
-         nft_collection_mint.key().as_ref(),
-         EDITION,
-     ],
-     seeds::program = token_metadata_program.key(),
-     bump,
- )]
+ #[account()]
  /// CHECK: This account will only be touched by the Metaplex program
  pub nft_collection_master_edition: UncheckedAccount<'info>,

- #[account(
-     mut,
-     seeds = [
-         METADATA,
-         token_metadata_program.key().as_ref(),
-         nft_collection_mint.key().as_ref(),
-     ],
-     seeds::program = token_metadata_program.key(),
-     bump,
- )]
+ #[account(mut)]
  /// CHECK: This account will only be touched by the Metaplex program
  pub nft_collection_metadata: UncheckedAccount<'info>,
```
This optimization reduces the computational overhead of instruction context validation while maintaining security, as the Metaplex program will still enforce proper account relationships during execution.