Buck Labs: Smart Contracts
Cantina Security Report
Organization
- @bucklabs
Engagement Type
Spearbit Web3
Period
-
Repositories
Researchers
Findings
High Risk
3 findings
3 fixed
0 acknowledged
Medium Risk
6 findings
4 fixed
2 acknowledged
Low Risk
8 findings
7 fixed
1 acknowledged
Informational
16 findings
14 fixed
2 acknowledged
Gas Optimizations
6 findings
5 fixed
1 acknowledged
High Risk: 3 findings
ABI struct mismatch in band config
Description
LiquidityWindow defines a local IPolicyManager.BandConfig struct that does not match the layout used by PolicyManager. This compiles because function selectors depend only on the function name and input types, but the return value is ABI-decoded using the caller's struct layout. The result is that LiquidityWindow decodes the wrong fields from PolicyManager.getBandConfig.

In LiquidityWindow the interface is:

    struct BandConfig {
        uint16 mintFeeBps;
        uint16 refundFeeBps;
        uint16 halfSpreadBps;
        uint16 alphaBps;
        uint16 floorBps;
        uint16 dexBuyFeeBps;
        uint16 dexSellFeeBps;
    }

In PolicyManager the actual struct is:

    struct BandConfig {
        uint16 halfSpreadBps;
        uint16 mintFeeBps;
        uint16 refundFeeBps;
        uint32 oracleStaleSeconds;
        uint16 deviationThresholdBps;
        uint16 alphaBps;
        uint16 floorBps;
        uint16 distributionSkimBps;
        CapSettings caps;
    }

LiquidityWindow.requestRefund fetches the config and uses floorBps to compute the reserve floor:

    IPolicyManager.BandConfig memory bandConfig = IPolicyManager(policyManager).getBandConfig(params.currentBand);
    uint256 floor = _calculateFloor(bandConfig);
    uint256 floorAmount18 = (totalSupply * config.floorBps) / BPS_DENOMINATOR;

Because LiquidityWindow expects floorBps as the fifth field, it decodes the fifth field of PolicyManager.BandConfig, which is deviationThresholdBps. With the default settings, deviationThresholdBps is 25/50/100 while floorBps is 100, so the computed floor is too small in the Green and Yellow bands.

Impact: refunds can drain reserves down to deviationThresholdBps instead of the configured floorBps, reducing the intended liquidity floor and weakening reserve protection.

Recommendation
Consider updating LiquidityWindow to use the exact same BandConfig definition as PolicyManager, ideally by importing a shared interface. If only the floor is needed, replace the struct return with a dedicated getter, for example:

    function getBandFloorBps(Band band) external view returns (uint16);

Buck Labs: Fixed in commit 5d79958.
Cantina: Fix verified. Commit 5d79958 fixes the ABI struct mismatch by removing the local BandConfig decode in LiquidityWindow and using a dedicated floor getter instead:
- src/liquidity/LiquidityWindow.sol now calls getBandFloorBps and passes the returned uint16 into _calculateFloor, so it no longer decodes the wrong field from PolicyManager.BandConfig.
- src/policy/PolicyManager.sol adds getBandFloorBps, matching the recommendation for a dedicated getter.
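As a sketch of the recommended direction (contract names and the band set are illustrative; only getBandFloorBps is taken from the verified fix), a single shared interface removes the possibility of divergent layouts:

```solidity
// Shared interface imported by both contracts, so the caller's decode
// layout always matches the callee's return layout. The dedicated getter
// returns a primitive, which cannot be mis-decoded the way a mismatched
// struct layout can.
interface IPolicyManagerShared {
    enum Band { Green, Yellow, Red } // band set assumed from the report
    function getBandFloorBps(Band band) external view returns (uint16);
}

contract LiquidityWindowSketch {
    uint256 internal constant BPS_DENOMINATOR = 10_000;
    IPolicyManagerShared public policyManager;

    function _floorAmount(IPolicyManagerShared.Band band, uint256 totalSupply)
        internal
        view
        returns (uint256)
    {
        // uint16 is widened to uint256 before the multiplication.
        uint16 floorBps = policyManager.getBandFloorBps(band);
        return (totalSupply * floorBps) / BPS_DENOMINATOR;
    }
}
```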
Eligible supply is not reduced on late entry
Description
RewardsEngine integrates global eligible units using currentEligibleSupply, which is intended to represent the sum of balances that are currently eligible to earn. In the late-entry path of _handleInflow, the account is marked ineligible but its prior eligible balance is not removed from currentEligibleSupply. If the account already had an eligible balance before the late inflow, the global integrator keeps counting that balance even though the account will no longer accrue units.

    bool isLateEntry = (checkpointStart > 0 && now_ >= checkpointStart && now_ < epochEnd);
    if (isLateEntry) {
        s.eligible = false;
        s.lastAccrualTime = now_;
    } else {
        if (!s.excluded) {
            s.eligible = true;
            currentEligibleSupply += amount;
        }
    }
    if (elapsed > 0 && currentEligibleSupply > 0) {
        globalEligibleUnits += currentEligibleSupply * elapsed;
    }

This breaks the intended relationship between the per-account accruals and the global denominator used at distribution time. The contract then computes deltaIndex using a denominator that includes balances that are no longer eligible, so it dilutes rewards for eligible accounts and decouples globalEligibleUnits from the sum of actual eligible units.

Impact: rewards can be systematically diluted and distribution accounting becomes inconsistent because the denominator includes balances that are no longer earning.
Recommendation
When an account becomes ineligible during _handleInflow, subtract its eligible balance from currentEligibleSupply before flipping the flag. If eligibility can be partial per balance, track an explicit eligible balance per account and update currentEligibleSupply using that value.

Buck Labs: Fixed in commit 559b098.
Cantina: Fix verified. Commit 559b098 fixes the issue by removing the late-entry disqualification that inflated currentEligibleSupply and replacing it with late-inflow accounting, so the denominator only reflects earning balance.
- src/rewards/RewardsEngine.sol adds lateInflow/lateInflowEpoch, treats late inflows as non-earning, and keeps currentEligibleSupply unchanged for those tokens; outflows now reduce eligible supply only for the earning portion, so the global denominator stays consistent.
- The accrual paths are updated to use earningBalance (balance minus late inflow), preventing dilution from ineligible units.
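The earning-balance split can be sketched as follows (simplified; the field names lateInflow/lateInflowEpoch are from the report, the rest is illustrative):

```solidity
contract RewardsEngineSketch {
    struct AccountState {
        uint256 balance;
        uint256 lateInflow;      // tokens received after checkpointStart
        uint256 lateInflowEpoch; // epoch that lateInflow belongs to
    }

    mapping(address => AccountState) internal accounts;

    // Late inflow is non-earning only within its own epoch; afterwards the
    // full balance earns again, so the global denominator stays consistent
    // with the sum of per-account earning balances.
    function _earningBalance(address account, uint256 currentEpoch)
        internal
        view
        returns (uint256)
    {
        AccountState storage s = accounts[account];
        if (s.lateInflowEpoch != currentEpoch) return s.balance;
        return s.balance - s.lateInflow;
    }
}
```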
totalExcludedSupply drifts from reality
Description
RewardsEngine tracks totalExcludedSupply as the sum of balances for excluded accounts, but it only updates this value inside the setAccountExcluded() and setBreakageSink() functions. Excluded accounts can still receive and send STRX through normal transfers and reward mints, and those balance changes go through the token hook without adjusting totalExcludedSupply. As a result, totalExcludedSupply drifts from the real excluded balances over time.

    function setAccountExcluded(address account, bool isExcluded) external onlyRole(ADMIN_ROLE) {
        ...
        if (isExcluded) {
            totalExcludedSupply += s.balance;
            ...
        } else {
            if (totalExcludedSupply >= s.balance) {
                totalExcludedSupply -= s.balance;
            } else {
                totalExcludedSupply = 0;
            }
        }
    }

    function onBalanceChange(address from, address to, uint256 amount) external onlyToken {
        if (from != address(0) && from != address(this)) {
            _handleOutflow(from, amount);
        }
        if (to != address(0) && to != address(this)) {
            _handleInflow(to, amount);
        }
    }

Epoch configuration relies on this value to set the starting eligible supply:

    currentEligibleSupply = IERC20(token_).totalSupply() - totalExcludedSupply;

If totalExcludedSupply is stale, currentEligibleSupply becomes incorrect. Because excluded balances can increase via reward mints and transfers, this causes a persistent mismatch between the actual excluded supply and what the system uses as denominators during distributions. In extreme cases, where excluded balances shrink materially while the tracked value stays high, the subtraction can underflow and revert, blocking epoch configuration.

Impact: reward allocation is computed with an incorrect denominator, and epoch configuration can revert if totalExcludedSupply exceeds total supply due to drift.

Recommendation
Consider updating totalExcludedSupply on every balance change for excluded accounts. On the token hook path, subtract amount when an excluded account is the sender and add amount when an excluded account is the recipient, including mint and burn flows. If that is not safe, replace totalExcludedSupply with a more robust mechanism that cannot drift, such as tracking excluded balances per account and summing them on demand.

Buck Labs: Fixed in commit 8105ffb.
Cantina: Fix verified. Commit 8105ffb fixes the issue by updating totalExcludedSupply on every excluded-account balance change. src/rewards/RewardsEngine.sol now subtracts amount from totalExcludedSupply in _handleOutflow when s.excluded and adds amount in _handleInflow when s.excluded. This keeps currentEligibleSupply = totalSupply - totalExcludedSupply accurate and prevents the drift/underflow path described.
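The hook-path bookkeeping can be sketched like this (simplified; state layout and clamping choice are illustrative):

```solidity
contract ExcludedSupplySketch {
    struct AccountState { uint256 balance; bool excluded; }
    mapping(address => AccountState) internal accounts;
    uint256 public totalExcludedSupply;

    function _handleOutflow(address from, uint256 amount) internal {
        AccountState storage s = accounts[from];
        if (s.excluded) {
            // Clamp defensively so a transient inconsistency cannot underflow.
            uint256 dec = amount < totalExcludedSupply ? amount : totalExcludedSupply;
            totalExcludedSupply -= dec;
        }
        s.balance -= amount;
    }

    function _handleInflow(address to, uint256 amount) internal {
        AccountState storage s = accounts[to];
        if (s.excluded) {
            // Covers transfers and reward mints alike.
            totalExcludedSupply += amount;
        }
        s.balance += amount;
    }
}
```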
Medium Risk: 6 findings
Late-entry disqualification is griefable
Description
RewardsEngine marks an account ineligible for the remainder of the epoch on any inflow after checkpointStart. The hook is called for all transfers and all mints, and it has no context to distinguish a true late entry from a dust transfer or a reward mint. This means a third party can send a tiny amount of STRX to a victim during the checkpoint window and disqualify the victim for the entire epoch, even if the victim held their balance since epoch start. Claims also mint STRX to the claimant and therefore trigger the same inflow path, so claiming rewards during the checkpoint window can self-disqualify an account for the current epoch.

    bool isLateEntry = (checkpointStart > 0 && now_ >= checkpointStart && now_ < epochEnd);
    if (isLateEntry) {
        s.eligible = false;
        s.lastAccrualTime = now_;
    }
    super._update(from, to, value);
    _notifyRewards(from, to, value);

Because the eligibility flip is unconditional on the source of the inflow, any transfer or mint after checkpointStart can be used to force ineligibility, and the receiver cannot prevent or undo it within the same epoch.

Impact: an attacker can deny rewards to targeted accounts for the entire epoch, and users can unintentionally disqualify themselves by claiming during the checkpoint window.
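The attack requires no privileged access; a hypothetical attacker contract is enough to illustrate it:

```solidity
interface IERC20Minimal {
    function transfer(address to, uint256 amount) external returns (bool);
}

// Illustrative griefing sketch: any dust transfer during the checkpoint
// window triggers the victim's late-entry path via the token hook.
contract GriefSketch {
    IERC20Minimal public immutable strx;

    constructor(IERC20Minimal strx_) {
        strx = strx_;
    }

    function grief(address victim) external {
        // Costs 1 wei of STRX; the hook treats the inflow as late entry
        // and sets the victim's eligible flag to false for the epoch.
        strx.transfer(victim, 1);
    }
}
```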
Recommendation
Consider restricting late-entry disqualification to sources that represent intentional entry, such as primary mints from LiquidityWindow or DEX buys identified by a trusted pair address, and ignoring incidental transfers and reward mints. If the hook cannot safely infer intent from (from, to, amount), extend the hook signature to include a reason or source flag so the RewardsEngine can apply the late-entry rule only to entry actions.

Buck Labs: Fixed in commit 559b098.
Cantina: Fix verified. Commit 559b098 fixes the issue by removing late-entry disqualification and replacing it with late-inflow accounting, so dust transfers and reward mints no longer destroy eligibility. RewardsEngine now tracks lateInflow/lateInflowEpoch and computes earningBalance = balance - lateInflow, so any inflow during the checkpoint simply becomes non-earning for the current epoch rather than flipping eligible to false. This directly eliminates the griefing/self-disqualification path described while still enforcing the “must hold through checkpoint” rule.

Reward debt not reset on exclusion
Description
RewardsEngine uses rewardDebt as the baseline that pairs with unitsAccrued for O(1) claims, as shown in the claim formula below. When unitsAccrued is cleared without also clearing rewardDebt, the baseline no longer matches the units, which causes current-epoch rewards to be permanently offset until enough new units accrue to exceed the stale debt.

    // claim = pendingRewards + (unitsAccrued * accIndex - rewardDebt)
    amount = s.pendingRewards;
    if (s.unitsAccrued > 0 && accRewardPerUnit > 0) {
        uint256 currentEpochReward = Math.mulDiv(s.unitsAccrued, accRewardPerUnit, ACC_PRECISION);
        if (currentEpochReward > s.rewardDebt) {
            amount += currentEpochReward - s.rewardDebt;
        }
    }

Two admin paths clear unitsAccrued but do not reset rewardDebt. In setAccountExcluded, an account that is excluded has its unitsAccrued reset while rewardDebt is left unchanged. If that account is later re-included in the same epoch (and not late-entry disqualified), it starts accruing fresh units on top of a stale rewardDebt value.

    // setAccountExcluded: exclusion path
    s.unitsAccrued = 0;
    s.eligible = false;

Similarly, setBreakageSink excludes the sink and clears unitsAccrued but leaves rewardDebt untouched, which creates the same accounting mismatch for that account.

    // setBreakageSink
    s.excluded = true;
    s.eligible = false;
    s.unitsAccrued = 0;

Impact: accounts that are excluded and then re-included within the same epoch can be underpaid for the remainder of that epoch, with rewards effectively suppressed until the stale rewardDebt is overcome.

Recommendation
Whenever unitsAccrued is reset as part of exclusion changes, also reset rewardDebt to keep the accounting invariant intact. Consider whether pendingRewards should be preserved or cleared based on policy, but ensure unitsAccrued and rewardDebt remain consistent.

Buck Labs: Fixed in commit 96a314c.
Cantina: Fix verified. Commit 96a314c fixes the issue. RewardsEngine now zeroes rewardDebt alongside unitsAccrued in both setAccountExcluded and setBreakageSink, preserving the unitsAccrued/rewardDebt invariant and preventing the underpayment scenario described.

Ineligible accounts can mint synthetic breakage units via post-checkpoint outflows and self-transfers
Description
If an eligible user transfers STRX tokens after the checkpoint ends, they technically forfeit their rewards until the epoch ends. This is managed by two global variables: futureBreakageUnits and totalBreakageAllTime.
After checkpointStart, late entrants are explicitly marked as eligible = false and do not accrue any reward units for the remainder of the epoch. However, if such an ineligible account performs an outflow after checkpointEnd (including a self-transfer), the contract executes the following logic:

    uint256 futureUnits = amount * remainingSeconds;
    futureBreakageUnits += futureUnits;
    totalBreakageAllTime += futureUnits;

This creates synthetic breakage units even though the account was never eligible for rewards, because:
- The account has no future earnings to forfeit.
- No eligible units are being removed to offset the newly added breakage units.
This issue can be easily exploited through self-transfers. As a result, rewards for honest, eligible participants can be diluted by inflating the total-units variables that are used as the denominator when calculating eligible rewards.
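A minimal sketch of the recommended guard (state layout and names are illustrative; the hook signature follows the report):

```solidity
contract BreakageGuardSketch {
    struct AccountState { bool eligible; }
    mapping(address => AccountState) internal accounts;

    uint256 public futureBreakageUnits;
    uint256 public totalBreakageAllTime;
    uint256 public checkpointEnd;
    uint256 public epochEnd;

    function onBalanceChange(address from, address to, uint256 amount) external {
        if (from == to) return; // self-transfer is a no-op: skip entirely

        AccountState storage s = accounts[from];
        bool inBreakageWindow =
            block.timestamp > checkpointEnd && block.timestamp < epochEnd;
        // Only an eligible sender has future earnings to forfeit.
        if (s.eligible && inBreakageWindow) {
            uint256 futureUnits = amount * (epochEnd - block.timestamp);
            futureBreakageUnits += futureUnits;
            totalBreakageAllTime += futureUnits;
        }
        // ...normal inflow/outflow accounting would follow here
    }
}
```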
PoC
As demonstrated in the provided test:
- user2 enters after checkpointStart and is correctly marked ineligible.
- After checkpointEnd, repeated self-transfers by user2 cause totalBreakageAllTime to increase monotonically.
    function test_audit() external {
        vm.startPrank(user1);
        usdc.transfer(user2, 100e6);
        usdc.approve(address(liquidityWindow), 100e6);
        liquidityWindow.requestMint(address(user1), 100e6, 0, 0);

        vm.warp(block.timestamp + 12 days);
        /// CHECKPOINT STARTED = USER2 IS INELIGIBLE
        vm.startPrank(user2);
        usdc.approve(address(liquidityWindow), 100e6);
        liquidityWindow.requestMint(address(user2), 100e6, 0, 0);

        rewardsEngine._accounts(user1);
        rewardsEngine._accounts(user2);
        rewardsEngine.totalBreakageAllTime();
        rewardsEngine.futureBreakageUnits();

        vm.warp(block.timestamp + 16 days);

        for (uint256 i; i < 4; i++) {
            strx.transfer(user2, strx.balanceOf(user2));
        }

        rewardsEngine.totalBreakageAllTime();
        rewardsEngine.futureBreakageUnits();
    }

Recommendation
Modify the post-checkpoint outflow logic so that only eligible accounts can generate breakage. Optionally, also guard the onBalanceChange() hook against self-transfers.

Buck Labs: Fixed in commit 67007bb.
Cantina: Fix verified. Commit 67007bb fixes the issue by adding a self-transfer short-circuit in RewardsEngine so no breakage accrues on no-op transfers, and by gating post-checkpoint breakage on s.eligible so ineligible accounts cannot mint future breakage. With the existing fromEarning/late-inflow split, late entrants only have non-earning balance and therefore generate zero breakage.

Temporal collateral ratio inflation during reward distribution
State
- Acknowledged
Severity
- Severity: Medium
Submitted by
Sujith S
Description
The distribute() function in RewardsEngine.sol creates a temporal inconsistency in the collateral ratio (CR) calculation by depositing USDC into the reserve without immediately minting the corresponding STRX tokens. This creates a synthetic CR inflation that persists until users progressively claim their rewards, potentially allowing operations based on an artificially healthy CR that does not reflect the protocol's actual solvency.
The collateral ratio is calculated as:
CR = (R + HC×V) / L

Where:
- R = USDC balance in liquidityReserve
- HC×V = haircut-adjusted off-chain collateral value
- L = STRX total supply
The rewards distribution flow is:
- USDC is transferred to the reserve, causing the R value to increase immediately.
- The allocation of STRX tokens is determined by the CAP price, but no STRX minting takes place.
- Users claim later: STRX is only minted when users call claim(), so L increases progressively as users claim.
The Temporal Window: Between steps 1 and 3, there exists a period (potentially hours/days) where:
- R has increased (USDC in reserve)
- L has NOT increased (STRX supply unchanged)
- CR appears artificially inflated
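The inflation can be illustrated with hedged example figures (the numbers are illustrative, not taken from the protocol's configuration, and the off-chain collateral term HC×V is ignored):

```solidity
// Illustrative: 990k USDC reserve backing 1.00M STRX gives CR = 0.99.
// A 50k USDC distribution raises R immediately while L is unchanged, so
// CR reads 1.04 until claims progressively mint the corresponding STRX.
function crSketch() pure returns (uint256 crBefore, uint256 crAfter) {
    uint256 R = 990_000e6;     // reserve USDC, 6 decimals
    uint256 L = 1_000_000e18;  // STRX supply, 18 decimals
    uint256 distribution = 50_000e6;

    // Scale USDC (6 decimals) up to 18 decimals, then express CR 1e18-scaled.
    crBefore = (R * 1e12 * 1e18) / L;                 // 0.99e18
    crAfter = ((R + distribution) * 1e12 * 1e18) / L; // 1.04e18
}
```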
This issue can cause downstream effects on the protocol's operational assumptions:
- Incorrect Solvency Assessment: If CR is 0.99 (undercollateralized) and a reward distribution occurs, the temporary USDC influx could push CR above 1.0. The system appears healthy when it is actually undercollateralized.
- Operational Decisions Based on False CR: PolicyManager.getCAPPrice() uses CR to determine pricing. Band transitions in PolicyManager.refreshBand() rely on CR. Distributions may be allowed or blocked incorrectly based on inflated CR. enforceCROnClaim checks use inflated CR for headroom calculations.
Proof of Concept
The following Proof of Concept explains how distributing rewards immediately changes the collateral ratio.
    function test_audit() external {
        vm.startPrank(user1);
        usdc.transfer(user2, 100e6);
        usdc.approve(address(liquidityWindow), 100e6);
        liquidityWindow.requestMint(address(user1), 100e6, 0, 0);

        vm.startPrank(user1);
        usdc.approve(address(liquidityWindow), 100e6);
        vm.warp(block.timestamp + 200);

        console.log("-------------- TRANSFERRING TOKENS -----------------");
        console.log("user1 STRX balance before transfer:", strx.balanceOf(user1));
        // strx.transfer(user2, 50e18);
        console.log("user1 STRX balance after transfer:", strx.balanceOf(user1));
        console.log("user2 STRX balance after transfer:", strx.balanceOf(user2));

        usdc.transfer(distributor, 1000e6);

        console.log("===== BEFORE DISTRIBUTION =====");
        console.log("Collateral Ratio:", collateralAttestation.getCollateralRatio());

        vm.warp(block.timestamp + 30 days);
        vm.startPrank(distributor);
        usdc.approve(address(rewardsEngine), 1000e6);
        rewardsEngine.distribute(1000e6);

        console.log("===== AFTER DISTRIBUTION =====");
        console.log("Collateral Ratio:", collateralAttestation.getCollateralRatio());
    }

The full PoC setup is shared with the project team.
Recommendation
Consider one of the following fixes:
- Immediate Minting to Treasury: Modify RewardsEngine.distribute() to mint STRX tokens immediately and hold them in the contract's treasury.
- Virtual Supply Adjustment: Modify CollateralAttestation.getCollateralRatio() to account for pending reward obligations.
Buck Labs: Acknowledged. This is true and a valid concern. We have a few protections in place, like blockDistributeOnDepeg and caps, and operationally, if we were ever right at something like 0.99 or 0.98 collateralization where this sort of thing would be possible, we would deploy a buffer of capital into the reserve before calling distribute to make it a clean overcollateralization.

forceAllow() can cause inclusion of a non-compliant address in the access list
State
Severity
- Severity: Medium
Submitted by
Chinmay Farkya
Description
From the comments under the STRX.sol::_update() function, it is clear that the Denylist can also be used to freeze accounts and prevent them from sending or receiving STRX tokens (USDT-style blacklisting), independently of the allowlist (which is based on compliance).

But the logic for reinstating an address via forceAllow() is problematic:

- Suppose a wallet address was not in the merkle tree (not compliant originally) and not in the allowlist.
- Then it gets denylisted simply due to being a sanctioned address, to prevent transfers etc.
- Later, if this address is ever removed from the denylist, it is automatically added to the allowlist; it does not need to go through the usual registration route.
- It automatically becomes an allowlist member, without registering via a merkle proof, thus bypassing compliance checking.
    function forceAllow(address account) external onlyOwner {
        if (_denylisted[account]) {
            _denylisted[account] = false;
            emit Denylisted(account, false);
        }
        if (!_allowed[account]) {
    ==>     _allowed[account] = true;
            emit AccessUpdated(account, true, currentRootId);
        }

This happens because forceAllow() assumes that the address being reinstated was already registered before, while it could have been merely blacklisted from transfers. Such an address can immediately start using the requestMint() and requestRefund() facilities in LiquidityWindow even though it was never in the compliance list.

Recommendation
Consider splitting the forceAllow() logic into two functions: one to remove an address from the denylist, and another to force-include an address in the allowlist.

Buck Labs: Fixed in commit b3555cd.
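A sketch of the recommended split, reusing the state from the excerpt above (the name removeFromDenylist is hypothetical):

```solidity
// Lifting a transfer freeze no longer implies compliance registration.
function removeFromDenylist(address account) external onlyOwner {
    if (_denylisted[account]) {
        _denylisted[account] = false;
        emit Denylisted(account, false);
    }
}

// Separate, explicit action reserved for addresses whose compliance
// status has been verified out of band.
function forceAllow(address account) external onlyOwner {
    if (!_allowed[account]) {
        _allowed[account] = true;
        emit AccessUpdated(account, true, currentRootId);
    }
}
```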
Cantina: Fix verified. Commit b3555cd fixes the issue by separating the forceAllow() logic into two functions.

CAP price uses maximum of oracle and collateral ratio instead of minimum
State
- Acknowledged
Severity
- Severity: Medium
Submitted by
Sujith S
Description
The getCAPPrice() function in PolicyManager.sol determines the collateral-aware peg (CAP) price by using Math.max(oraclePrice, cr) during undercollateralized conditions (CR < 1.0). In that situation, the current code chooses the greater of the oracle price and the collateral ratio:

    if (oracleHealthy) {
        // Oracle is healthy, use max(oracle, CR) per architecture
        (uint256 oraclePrice,) = IOracleAdapter(oracleAdapter).latestPrice();
        rawCAP = Math.max(oraclePrice, cr);
    }

This leads the protocol to always present users with the "best" price rather than a conservative estimate, potentially enabling value extraction if either data source becomes compromised or corrupted.
Recommendation
Replace Math.max with Math.min to use the more conservative price source:

    if (oracleHealthy) {
        // Oracle is healthy, use min(oracle, CR) for conservative pricing
        (uint256 oraclePrice,) = IOracleAdapter(oracleAdapter).latestPrice();
        rawCAP = Math.min(oraclePrice, cr);
    }

This ensures that even if one data source is compromised or erroneous, the protocol uses the lower (more conservative) price to protect reserves. The min-of-sources approach is a standard safety pattern in DeFi pricing oracles, as it prevents exploitation when any single price feed becomes unreliable.
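The difference can be sketched with example values (1e18-scaled; the 0.95/0.60 figures are illustrative):

```solidity
// If the oracle reports 0.95 while the on-chain CR is 0.60, max() prices
// refunds at the richer source, while min() uses the conservative one.
function capSketch() pure returns (uint256 maxCap, uint256 minCap) {
    uint256 oraclePrice = 0.95e18;
    uint256 cr = 0.60e18;
    maxCap = oraclePrice > cr ? oraclePrice : cr; // 0.95e18: current behavior
    minCap = oraclePrice < cr ? oraclePrice : cr; // 0.60e18: recommended
}
```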
Buck Labs: Acknowledged. There is a very good chance we switch to min instead of max with v2, but for this contract set and our initial launch, we're going to go with max. If oracle shows 0.95 but CR is 0.60, using 0.95 penalizes users beyond what the market believes the backing is worth. Or vice versa.
Low Risk: 8 findings
CAP pricing trusts stale oracle if strictMode off
Description
PolicyManager's CAP pricing relies on OracleAdapter.isHealthy(maxOracleStale) to decide whether it can use the oracle price when CR is below 1.0. OracleAdapter.isHealthy short-circuits to true when strictMode is disabled, which means stale oracle prices are treated as healthy unless strictMode has already been toggled on via syncOracleStrictMode or by the owner.

    function isHealthy(uint256 maxStale) external view returns (bool) {
        if (!strictMode) {
            return true;
        }
        (uint256 price, uint256 updatedAt) = this.latestPrice();
        if (price == 0 || updatedAt == 0) return false;
        return block.timestamp <= updatedAt + maxStale;
    }

    uint256 maxOracleStale = cr < 1e18 ? STRESSED_ORACLE_STALENESS : HEALTHY_ORACLE_STALENESS;
    bool oracleHealthy = IOracleAdapter(oracleAdapter).isHealthy(maxOracleStale);
    if (oracleHealthy) {
        (uint256 oraclePrice,) = IOracleAdapter(oracleAdapter).latestPrice();
        rawCAP = Math.max(oraclePrice, cr);
    }

If CR drops below 1.0 and strictMode has not yet been enabled, the oracle is always considered healthy and can feed stale or outdated prices into CAP. This can lift the refund price above what a fresh oracle would allow under stress, which is exactly the situation strict mode is intended to protect against.

Impact: undercollateralized periods can use stale oracle prices to compute CAP, increasing refund prices and weakening solvency.
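The stressed-branch check can be sketched as follows (the surrounding declarations are assumed from the excerpts above; only the freshness logic is new):

```solidity
function _stressedCAP(uint256 cr) internal view returns (uint256 rawCAP) {
    // Check freshness directly, independent of strictMode.
    (uint256 oraclePrice, uint256 updatedAt) =
        IOracleAdapter(oracleAdapter).latestPrice();
    bool fresh =
        updatedAt != 0 && block.timestamp - updatedAt <= STRESSED_ORACLE_STALENESS;
    // Stale or unset oracle: fall back to cr only.
    rawCAP = fresh ? Math.max(oraclePrice, cr) : cr;
}
```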
Recommendation
Enforce oracle freshness directly in getCAPPrice() when cr < 1e18, independent of strictMode. For example, require updatedAt != 0 and block.timestamp - updatedAt <= STRESSED_ORACLE_STALENESS before using oraclePrice, and otherwise fall back to cr only.

Buck Labs: Fixed in commit 9f920d5.
Cantina: Fix verified. Commit 9f920d5 fixes the issue by enforcing oracle freshness directly in getCAPPrice() under stress, independent of strictMode. PolicyManager now calls latestPrice() directly and checks updatedAt against STRESSED_ORACLE_STALENESS before using the oracle price; otherwise it falls back to cr. This removes the stale-oracle bypass from isHealthy (strictMode = false).

Refund cap ignores floor depletion
State
Severity
- Severity: Low
Submitted by
r0bert
Description
The refund-cap derivation computes a cap based on alphaBps and the available reserve above the floor. When reserves are at or below the floor, the computed capAmount stays zero, but the function then overwrites the result with the configured baseAggregate refund cap:

    uint256 capAmount = 0;
    if (!isMint) {
        uint256 floorAmount = Math.mulDiv(config.floorBps, L, BPS_DENOMINATOR);
        if (reserveBalance > floorAmount) {
            uint256 alphaAmount = Math.mulDiv(config.alphaBps, L, BPS_DENOMINATOR);
            uint256 available = reserveBalance - floorAmount;
            capAmount = alphaAmount < available ? alphaAmount : available;
        }
    }
    if (capAmount != 0) {
        aggregateBps = Math.mulDiv(capAmount, BPS_DENOMINATOR, L);
    } else {
        aggregateBps = 0;
    }
    if (baseAggregate != 0) {
        if (aggregateBps == 0 || baseAggregate < aggregateBps) {
            aggregateBps = baseAggregate;
        }
    }

This means aggregateBps becomes baseAggregate even when there is no reserve buffer above the floor. If the intended behavior is to hard-stop refunds when reserves are at or below the floor, the cap never tightens to zero, and the refund-cap check continues to allow refunds at the configured base rate.

Impact: refunds are not capped to zero when reserves are at or below the floor, so the cap mechanism does not enforce a hard stop and relies on other safeguards to prevent draining.
Recommendation
If reserves at or below the floor should force a zero refund cap, explicitly return zero in that case and do not apply the baseAggregate fallback. A minimal fix is to only apply the baseAggregate minimum when reserveBalance > floorAmount for refunds.

Buck Labs: Fixed in commits 049accb and 2d91ea3.
Cantina: Fix verified. Commits 049accb and 2d91ea3 fix the issue by hard-stopping refunds (returning a zero cap) when reserves are at or below the floor.
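The minimal fix can be sketched with the variables from the excerpt above (surrounding declarations assumed):

```solidity
uint256 aggregateBps = 0;
if (!isMint) {
    uint256 floorAmount = Math.mulDiv(config.floorBps, L, BPS_DENOMINATOR);
    if (reserveBalance > floorAmount) {
        uint256 alphaAmount = Math.mulDiv(config.alphaBps, L, BPS_DENOMINATOR);
        uint256 available = reserveBalance - floorAmount;
        uint256 capAmount = alphaAmount < available ? alphaAmount : available;
        aggregateBps = Math.mulDiv(capAmount, BPS_DENOMINATOR, L);
        // The baseAggregate minimum applies only while a buffer above the
        // floor remains.
        if (baseAggregate != 0 && (aggregateBps == 0 || baseAggregate < aggregateBps)) {
            aggregateBps = baseAggregate;
        }
    }
    // else: aggregateBps stays 0, a hard stop at or below the floor
}
```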
Reserve ratio cast can wrap bands
Description
PolicyManager derives the reserve ratio and equity buffer in basis points and stores both in uint16. The values are computed with full precision, then down-cast without clamping. If the computed basis points exceed 65535, the uint16 cast wraps modulo 2^16. This can turn a very healthy reserve ratio into an arbitrary smaller number and drive band selection off the wrong threshold.

    uint16 reserveRatioBps = L == 0 ? 0 : uint16(Math.mulDiv(reserveBalance, BPS_DENOMINATOR, L));

Because band evaluation relies on reserveRatioBps, a wrap can misclassify the band. For example, if total supply is small and reserves are relatively high, reserveRatioBps can exceed 65535 and wrap to a value below the 5% or 2.5% thresholds, pushing the system into Yellow or Red despite being over-reserved.

Impact: incorrect band selection leads to wrong fees, spreads, caps, and governance signals in early or unusual reserve/supply conditions.
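A sketch of the clamp, with an illustrative wrap value:

```solidity
uint256 constant BPS_DENOMINATOR = 10_000;

// Compute in uint256 and cap before narrowing: a true ratio of 70,000 bps
// (700% reserved) then reports 10,000 bps instead of wrapping to
// 70,000 mod 65,536 = 4,464 bps.
function clampedRatioBps(uint256 reserveBalance, uint256 L) pure returns (uint16) {
    if (L == 0) return 0;
    uint256 ratio = (reserveBalance * BPS_DENOMINATOR) / L;
    if (ratio > BPS_DENOMINATOR) ratio = BPS_DENOMINATOR; // clamp, never wrap
    return uint16(ratio); // safe: a value <= 10,000 fits in uint16
}
```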
Recommendation
Clamp the computed basis points to BPS_DENOMINATOR (or at least to type(uint16).max) before casting, or store the ratios in a wider type. A minimal fix is to compute into a uint256 and cap at 10000 before assigning to uint16.

Buck Labs: Fixed in commit 3671887.
Cantina: Fix verified. Commit 3671887 fixes the issue by clamping the computed ratios before the uint16 cast. PolicyManager now computes reserveRatioCalc/equityBufferCalc as uint256, clamps each to BPS_DENOMINATOR, and only then casts to uint16, preventing wrap and band misclassification.

Refund can round to zero and then revert
Description
LiquidityWindow.requestRefund computes a gross USDC amount by scaling the 18-decimal price result down to 6 decimals. For very small STRX amounts, the scaled value can round down to zero. The function does not reject this locally when minUsdcOut is zero, so execution reaches LiquidityReserve.queueWithdrawal with an amount of zero and reverts with InvalidAmount. The refund always fails in this case, but the error comes from the reserve and is confusing for callers. The impact is limited to unexpected revert behavior and poor UX for small refunds.

    uint256 grossUsdc18 = (strxAmount * effectivePrice) / PRICE_SCALE;
    uint256 grossUsdc = grossUsdc18 / USDC_SCALE_FACTOR;
    ...
    ILiquidityReserve(liquidityReserve).queueWithdrawal(address(this), grossUsdc);

Recommendation
Add a guard after computing grossUsdc to revert locally when the amount is zero, or introduce a dedicated error such as RefundTooSmall to make the failure mode explicit.

Buck Labs: Fixed in commit 1362965.
Cantina: Fix verified. Commit 1362965 fixes the issue by adding a zero-amount guard in LiquidityWindow.requestRefund, so the revert happens locally and clearly when the refund rounds to zero.

PolicyManager.getAggregateRemainingCapacity uses wrong caps
Description
PolicyManager.getAggregateRemainingCapacity computes mint and refund remaining capacity using simple BPS multipliers from the band config. For mints, a zero mintAggregateBps is treated elsewhere as unlimited, but here it produces a zero cap until recordMint initializes the cycle, so the reported remaining capacity is zero even though enforcement allows unlimited mints. For refunds, this function uses the base refundAggregateBps from config rather than the derived cap from _computeRefundCap, which incorporates reserve and floor logic, so the reported remaining capacity can disagree with actual enforcement. The impact is incorrect telemetry for keepers and front ends, which can lead to misleading UX and wrong automation decisions.

    uint256 mintCapTokens = Math.mulDiv(snapshot.totalSupply, config.caps.mintAggregateBps, BPS_DENOMINATOR);
    uint256 refundCapTokens = Math.mulDiv(snapshot.totalSupply, config.caps.refundAggregateBps, BPS_DENOMINATOR);

Recommendation
Treat a zero mintAggregateBps as unlimited in getAggregateRemainingCapacity by returning a sentinel such as type(uint256).max or a separate flag. For refunds, compute refundAggregateBps via _computeRefundCap(snapshot) before converting to tokens so that the returned remaining capacity matches enforcement.

Buck Labs: Fixed in commit 897601c.
Cantina: Fix verified. Commit 897601c fixes the issue by treating mintAggregateBps == 0 as unlimited and using _computeRefundCap for refund capacity, so getAggregateRemainingCapacity now matches enforcement.

Inconsistent admin rights in queued withdrawals
State
Severity
- Severity: Low
Submitted by
Chinmay Farkya
Description
In LiquidityReserve.sol, the ADMIN_ROLE can queue a request for a USDC withdrawal, which can later be either cancelled or executed.

The current access-control model requires the ADMIN_ROLE address to create such a withdrawal and gives it the ability to cancel the withdrawal, but to execute it, the TREASURER_ROLE needs to step in:

    function executeWithdrawal(uint256 id) external onlyRole(TREASURER_ROLE) nonReentrant whenNotPaused {

Under the current access model, the ADMIN_ROLE is responsible for relaying governance decisions. Not allowing governance to finalise a withdrawal, while simultaneously allowing it to queue and cancel one, is inconsistent and breaks the process, since a different role needs to be involved.

Recommendation
Consider allowing ADMIN_ROLE to call executeWithdrawal() as well.

Buck Labs: Fixed in commit 293f339.
Cantina: Fix verified. Commit 293f339 fixes the issue by allowing ADMIN_ROLE to execute withdrawals.

Potential loss of trading fees if liquidity moves to multiple dexes
Severity
- Severity: Low
Submitted by
Chinmay Farkya
Description
In a previous audit, it was mentioned that STRX trading fees are charged for only a single dexPair, and the team said they intended to add additional dexPair addresses if meaningful pools emerge. But right now there is no way to expand the dexPair list (only a single address can be used); the team would instead need to upgrade the STRX token contract and add code.

In any situation where there are multiple dex pools for $BUCK, trades will incur inconsistent fee application. This can lead to loss of trading fees for Buck Labs, and liquidity moving between pools would require constant reconfiguration of fee exemptions and the protocol-operated arbitrage bot.
Recommendation
Consider adding logic that allows admins to add new dex pools to a list, where this swap fee is applied consistently.
Buck Labs: Fixed in commit 7a3aed5.
Cantina: Fix verified. Commit 7a3aed5 fixes the issue by replacing the single
dexPairwith anisDexPairmapping and addingaddDexPair/removeDexPair, so fees apply across multiple pools.Spread calculation discrepancy in mint()/ refund()
State
- Acknowledged
Severity
- Severity: Low
Submitted by
Chinmay Farkya
Description
The spread calculation differs between the mint and refund flows in `LiquidityWindow.sol`.

In `requestMint()`, fees are first subtracted from the input USDC amount, then the spread is applied (via `effectivePrice`) to calculate the actual deposit value.

In `requestRefund()`, the order is reversed: the spread is applied to the total requested STRX amount, and fees are calculated afterwards from the final USDC amount (with the spread already priced in).

```solidity
uint256 effectivePrice = _applySpread(params.capPrice, false, params.halfSpreadBps);
if (minEffectivePrice != 0 && effectivePrice < minEffectivePrice) {
    revert PriceTooLow(effectivePrice, minEffectivePrice);
}
// Calculate USDC amount in 18 decimals, then scale down to 6
uint256 grossUsdc18 = (strxAmount * effectivePrice) / PRICE_SCALE;
uint256 grossUsdc = grossUsdc18 / USDC_SCALE_FACTOR;
feeUsdc = _calculateFeeAmount(grossUsdc, params.refundFeeBps);
usdcOut = grossUsdc - feeUsdc;
```

As the above logic from `requestRefund()` shows, `grossUsdc` is calculated first and `feeUsdc` afterwards. `strxAmount` is the total requested refund amount, fee included, so the spread (`effectivePrice`) is applied to the whole amount (the actual refund order amount plus the fee) when calculating `grossUsdc`.

To put it in perspective, the spread is applied to the fee as well, which can slightly alter fee calculations.
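The difference between the two orderings can be shown with a small, stylized Python sketch (function names and values are illustrative, not the contract's; the spread is modelled as a simple basis-point discount):

```python
# Two orderings of fee and spread, all amounts and rates in basis points.
BPS = 10_000

def mint_style(amount_in: int, fee_bps: int, half_spread_bps: int):
    """requestMint()-style ordering: fee first, then spread on the net amount."""
    fee = amount_in * fee_bps // BPS
    net = amount_in - fee
    value = net * (BPS - half_spread_bps) // BPS
    return fee, value

def refund_style(amount_in: int, fee_bps: int, half_spread_bps: int):
    """requestRefund()-style ordering: spread on the gross (fee included), fee last."""
    gross = amount_in * (BPS - half_spread_bps) // BPS
    fee = gross * fee_bps // BPS
    return fee, gross - fee
```

With `amount_in = 1_000_000`, `fee_bps = 100`, and `half_spread_bps = 50`, both orderings produce the same net value (985,050, since the multiplications commute up to rounding), but the fee charged differs (10,000 vs 9,950) because the refund-style ordering discounts the fee by the spread.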
Recommendation
Consider using the same math for applying the spread in both the mint and refund flows. The approach used in `requestMint()` is the better one, as applying the spread to fees does not seem logical.
Buck Labs: Acknowledged. I'm gonna accept this for v1 and address in v2, they are inverse flows + the math difference feels negligible to me to open it up right now.
Informational: 16 findings
Epoch rollover overpays post-dist units
Description
`RewardsEngine` settles post-distribution accrual by adding units to `s.unitsAccrued` and baselining them with `rewardDebt`, which is meant to ensure those units earn zero for the rest of the epoch. This happens when an account is settled after a distribution in the same epoch:

```solidity
s.unitsAccrued += postDistUnits;
s.rewardDebt += Math.mulDiv(postDistUnits, accRewardPerUnit, ACC_PRECISION);
```

When the account later crosses an epoch boundary, the prior epoch is finalized by multiplying the entire `s.unitsAccrued` by the prior epoch's `deltaIndex` without subtracting `rewardDebt`. That finalization path treats all accrued units as pre-distribution units, even if they were accumulated after the distribution:

```solidity
if (report.deltaIndex > 0 && s.unitsAccrued > 0) {
    s.pendingRewards += Math.mulDiv(s.unitsAccrued, report.deltaIndex, ACC_PRECISION);
}
```

`deltaIndex` is computed at distribution time using a denominator that only includes units up to `distributionTime`. If a user accrues post-distribution units and does not settle again until the next epoch, those post-distribution units remain in `s.unitsAccrued` and are rewarded again at rollover, which credits rewards that were never part of the allocation. This creates a deterministic overpayment path for any account that accrues after distribution and only settles in the next epoch.

However, per the stated ops (monthly distribution after epoch end), `distributionTime` is capped to `epochEnd`, so there is no post-distribution window within the epoch. Under that timing, post-distribution units cannot exist, so this path is not exploitable in practice. The contract does not enforce that timing; if distributions are ever executed before epoch end (or if the next epoch is configured first so `distribute()` happens mid-epoch), then post-distribution units can accrue and the overpay becomes real.
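The double-count can be sketched numerically (illustrative values, heavily simplified from the contract's actual accounting):

```python
# Sketch of the rollover overpayment: finalization multiplies ALL accrued
# units by the prior epoch's deltaIndex, but deltaIndex was derived from a
# denominator that excludes post-distribution units.
ACC_PRECISION = 10**18

def finalize(units_accrued: int, delta_index: int) -> int:
    # Mirrors Math.mulDiv(unitsAccrued, deltaIndex, ACC_PRECISION).
    return units_accrued * delta_index // ACC_PRECISION

pre_dist_units = 100             # units that earned a share of the epoch
post_dist_units = 40             # accrued after distributionTime, should earn 0
delta_index = 5 * ACC_PRECISION  # 5 reward tokens per unit

intended = finalize(pre_dist_units, delta_index)                    # 500
paid = finalize(pre_dist_units + post_dist_units, delta_index)      # 700
overpaid = paid - intended                                          # 200
```

Every post-distribution unit is paid the full per-unit index of an epoch it did not participate in, so the overpayment grows linearly with post-distribution accrual.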
Impact (if distribution happens mid-epoch): users can claim rewards on post-distribution time, so total claims for an epoch can exceed `tokensFromCoupon + dust`, inflating STRX beyond the intended allocation.

Recommendation
Ensure epoch rollover only finalizes units accrued before `distributionTime`. A minimal fix is to detect accounts that already crossed the distribution (for example by checking `s.lastAccrualTime >= report.distributionTime`) and skip or clear `s.unitsAccrued` before applying `deltaIndex` for the prior epoch. Alternatively, track a per-account flag or separate accumulator for post-distribution units so they are never included in the prior epoch finalization.

Buck Labs: Fixed in commit 3fa8c9a.

Cantina: Fix verified. Commit 3fa8c9a fixes the issue by enforcing the operational constraints at the contract level, so the mid-epoch distribution path is no longer possible. `src/rewards/RewardsEngine.sol` now reverts if `distribute()` is called before `epochEnd` (`DistributionTooEarly`), eliminating post-distribution accrual within the same epoch. It also prevents `configureEpoch()` from being called if the current epoch hasn't been distributed (`MustDistributeBeforeNewEpoch`), so the epoch cannot be advanced and then distributed mid-epoch.
Cap events/errors use misleading units
Description
`PolicyManager` originally emitted cap usage in basis points, but the implementation now tracks and emits actual token amounts. The event and error parameter names still indicate BPS, which can mislead indexers and monitoring tools that rely on names to interpret units:

```solidity
event DailyLimitRecorded(CapType capType, uint256 newAmountBps);
error CapExceeded(CapType capType, uint256 requestedBps, uint256 remainingBps);
```

These fields are now populated with token amounts rather than basis points:

```solidity
emit DailyLimitRecorded(CapType.MintAggregate, newAmount);
revert CapExceeded(CapType.RefundAggregate, amountTokens, remainingTokens);
```

Impact: off-chain consumers can misinterpret units, producing incorrect dashboards or alert thresholds.
Recommendation
Rename the event/error fields to reflect token amounts or introduce new events/errors with clear units and deprecate the old ones.
Buck Labs: Fixed in commit c70b3cf.
Cantina: Fix verified. Commit c70b3cf fixes the issue by renaming the event/error fields to reflect token units. `PolicyManager` updates the `DailyLimitRecorded` parameter name to `newAmountTokens` and renames the `CapExceeded`/`TransactionTooLarge` parameters to `requestedTokens`/`remainingTokens`/`maxAllowedTokens`, matching the actual units emitted.

Unused code paths and params
Description
Several contracts contain unused helpers, events, errors, and configuration that are never read or exercised by on-chain logic. These components add surface area and can mislead operators into thinking the behaviors are enforced.
`PolicyManager` includes internal helpers that are not used by any path, as well as diagnostics that are declared but never emitted. The `CapHit` event and `NotAuthorized` error are never referenced, and `InvalidAmount` is only used by an internal helper that is itself unused:

```solidity
function _computeMintCap(SystemSnapshot memory snapshot) internal view returns (uint256) {
    BandConfig memory config = _resolveActiveConfig();
    return _deriveCaps(snapshot, config, true);
}

function _recordCounterTokens(
    RollingCounter storage counter,
    uint256 amountTokens,
    CapType capType,
    address user,
    uint64 cycle
) internal {
    if (counter.capCycle != cycle) {
        counter.capCycle = cycle;
        counter.amountTokens = 0;
        emit CapWindowReset(user, capType);
    }
}

function _validateAmountBps(uint256 amountBps) internal pure {
    if (amountBps > BPS_DENOMINATOR) revert InvalidAmount();
}

event CapHit(
    CapType capType,
    address indexed user,
    uint256 attemptedAmount,
    uint256 remainingCapacity
);

error NotAuthorized();
error InvalidAmount();
```

`LiquidityWindow` stores an `oracleAdapter` and exposes `configureOracle`, but no mint/refund path reads `oracleAdapter`. The `OracleUnhealthy` and `Unauthorized` errors and the `IOracleAdapter` interface are also unused:

```solidity
address public oracleAdapter;

function configureOracle(address adapter) external onlyOwner {
    if (adapter == address(0)) revert ZeroAddress();
    oracleAdapter = adapter;
    emit OracleConfigured(adapter);
}

error OracleUnhealthy();
error Unauthorized();

interface IOracleAdapter {
    function latestPrice() external view returns (uint256 price, uint256 updatedAt);
    function isHealthy(uint256 maxStale) external view returns (bool);
    function setStrictMode(bool enabled) external;
}
```

`AccessRegistry` accepts a `rootId` parameter in `registerWithProof` but does not validate or use it, so the parameter has no effect:

```solidity
function registerWithProof(bytes32[] calldata proof, uint64 /* rootId */) external whenNotPaused {
    bytes32 leaf = keccak256(abi.encodePacked(msg.sender));
    require(MerkleProof.verifyCalldata(proof, merkleRoot, leaf), "AccessRegistry: invalid proof");
    _allowed[msg.sender] = true;
    emit AccessUpdated(msg.sender, true, currentRootId);
}
```

Impact: dead code and unused parameters increase maintenance burden and create ambiguity about which guardrails are actually enforced on-chain.
Recommendation
Remove unused helpers, events, errors and parameters, or wire them into the live logic so the configured surfaces reflect actual enforcement.
Buck Labs: Fixed in commit 9687766.
Cantina: Fix verified. Commit 9687766 fixes the issue by removing the unused/dead code and parameters called out.
Redundant zero-address owner check
Description
The constructor performs an explicit zero-address check on the initial owner before invoking `Ownable`'s constructor. `Ownable` already reverts when the owner is the zero address, so this duplicates the same validation logic:

```solidity
require(initialOwner != address(0), "AccessRegistry: invalid owner");
```

Recommendation
Remove the explicit zero-address check and rely on `Ownable`'s constructor validation, or keep only one check in a single place.
Buck Labs: Fixed in commit 36c42a9.
Cantina: Fix verified. Commit 36c42a9 fixes the issue by removing the redundant `require(initialOwner != address(0))` from `AccessRegistry`, leaving the `Ownable` constructor as the single check.

Dead code in multiply function
Description
The code checks for multiplication overflow by dividing the result and comparing against the original value. In Solidity 0.8+, the multiplication reverts on overflow, so the check is never reached and can never return 0.
```solidity
if (result / pow != value) return 0; // overflow check
```

This guard is effectively dead code and adds unnecessary operations. Impact is limited to minor gas and code clarity.
Recommendation
Remove the redundant overflow check, or if a non-reverting overflow behavior is intended, wrap the multiply in an unchecked block and document the sentinel return.
Buck Labs: Fixed in commit 6caf933.
Cantina: Fix verified. Commit 6caf933 fixes the issue by removing the unreachable overflow check in the `OracleAdapter` contract.

Redundant liquidity guard in refund
Description
The refund path performs a liquidity availability check only when the USDC address is nonzero, but the same function later requires USDC to be configured before it can queue a withdrawal and transfer funds. This means the guard does not change the outcome when USDC is unset and only defers the revert until after extra computation and external calls. The `liquidityReserve` check in the condition is also redundant, because that address is initialized once and is not updated during normal operation.

```solidity
if (usdc != address(0) && liquidityReserve != address(0)) {
    uint256 reserveBalance = IERC20(usdc).balanceOf(liquidityReserve);
    ...
    if (grossUsdc > availableLiquidity) {
        revert InsufficientLiquidity(grossUsdc, availableLiquidity);
    }
}

require(usdc != address(0), "USDC not configured");
```

This redundancy adds an unnecessary branch and extra work on the refund path without changing behavior.
Recommendation
Move the USDC configuration requirement to the top of the refund flow and remove the conditional guard around the liquidity check so the revert path is immediate and the logic is simpler.
Buck Labs: Fixed in commit bdc8e21.
Cantina: Fix verified. Commit bdc8e21 fixes the issue by moving the USDC configuration check to the top of `requestRefund` and removing the redundant conditional guard.

Redundant nonReentrant on cancel
Description
The `cancelWithdrawal` function is marked `nonReentrant` but only updates storage and emits an event. It does not perform external calls, so the reentrancy guard does not provide additional protection in this context:

```solidity
function cancelWithdrawal(uint256 id) external onlyRole(ADMIN_ROLE) nonReentrant {
```

Recommendation
Remove the `nonReentrant` modifier from the `cancelWithdrawal` function.

Buck Labs: Fixed in commit f876cee.

Cantina: Fix verified. Commit f876cee fixes the issue by removing the `nonReentrant` modifier from the `cancelWithdrawal` function in the `LiquidityReserve` contract.

Inconsistent token parameter naming in LiquidityWindow.initialize()
Severity
- Severity: Informational
Submitted by
Sujith S
Description
In `LiquidityWindow.sol`, the `initialize()` function parameter is named `strc_` instead of `strx_`, creating an inconsistency with the protocol's token naming convention (STRX) and the storage variable it populates.

Recommendation

Rename the parameter to `strx_` for consistency.
Buck Labs: Fixed in commit 068c636.
Cantina: Fix verified. Commit 068c636 fixes the issue by renaming the
initializeparameter (nowbuck_as part of the STRX→BUCK rename), resolving the inconsistency.Fallback oracle freshness critical in strict mode
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
Sujith S
Description
When CR < 1.0 (strict mode), the system relies on oracle pricing for CAP calculations with a 15-minute staleness threshold.
If the primary oracle (Pyth) fails after operating for extended periods, the system immediately falls back to the secondary oracle. If the fallback oracle is stale (more than 15 minutes old), this creates an instant DoS condition where all mint/refund operations revert with `StaleOraclePrice` errors.
Risk Scenario:
- CR drops to 0.95 → Strict mode activated (15-min staleness limit)
- Pyth functions normally for 24 hours
- Fallback oracle updated infrequently (every 24-48 hours)
- Pyth suddenly fails
- System falls back to stale oracle → DoS (all operations revert)
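The failure path above can be sketched in Python (names and structure are assumptions; the real adapter reads on-chain state rather than taking arguments):

```python
# Sketch of the strict-mode fallback: when Pyth is down, the fallback price
# is only usable if it is fresher than the strict staleness threshold.
STRICT_STALENESS = 15 * 60  # 15 minutes, the strict-mode (CR < 1.0) limit

class StaleOraclePrice(Exception):
    """Stand-in for the contract's StaleOraclePrice error."""

def resolve_price(pyth_healthy: bool, pyth_price: int,
                  fallback_price: int, fallback_age_seconds: int) -> int:
    if pyth_healthy:
        return pyth_price
    # Pyth failed: fall back, but enforce the staleness window.
    if fallback_age_seconds > STRICT_STALENESS:
        raise StaleOraclePrice()  # every mint/refund now reverts
    return fallback_price
```

With a fallback last updated 24 hours ago, any Pyth outage flips the system straight into the revert branch, which is the DoS scenario described above.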
Recommendation
Implement strict operational controls and update the fallback oracle even while Pyth is functioning normally, to avoid a sudden DoS of the protocol.
Buck Labs: Acknowledged. An operational plan was established:
- Healthy mode (CR ≥ 1.0): Update fallback every 24-48 hours (within 72-hour staleness limit)
- Strict mode (CR < 1.0): Update fallback every 10-12 minutes (within 15-minute staleness limit)
- Backend service maintains fallback proactively, not reactively after Pyth failures
Redundant decimals() override in STRX Token
State
Severity
- Severity: Informational
Submitted by
Sujith S
Description
The `STRX.sol` contract overrides `decimals()` to return a hard-coded value of 18, which is already the default value returned by OpenZeppelin's `ERC20Upgradeable` parent contract.
Remove the redundant override to reduce code surface and improve clarity.
Code cleanup
State
Severity
- Severity: Informational
Submitted by
Sujith S
Description
A few minor code quality issues were identified in the `PolicyManager` and `RewardsEngine` contracts:

- Unused SafeCast import: the `SafeCast` library is imported in `PolicyManager.sol` but never used in the contract.
- Outdated TODO comment in `PolicyManager.sol`.
- Unused `isHealthyCollateral()` and `balanceOf()` function declarations in the interface.
Recommendation
Consider removing the unused import, function declarations, and the outdated code documentation.
Buck Labs: Fixed in commits b88f4e6, 0c3e2f2, a24b262 and 41c8c7d.
Cantina: Fix verified. The issue was fixed by removing the unused `SafeCast` import, cleaning up the outdated comment, and dropping the unused interface declarations (`isHealthyCollateral` and `balanceOf`).

Redundant treasury storage variable
Description
The `treasury` state variable in `CollateralAttestation.sol` serves no functional purpose. The contract stores the treasury address but never uses it in any calculations or logic.
Remove redundant treasury-related code:
```diff
- address public treasury;

  function initialize(
      address admin,
      address attestor,
      address _strxToken,
      address _liquidityReserve,
      address _usdc,
-     address _treasury,
      uint8 _reserveAssetDecimals,
      uint256 _healthyStaleness,
      uint256 _stressedStaleness
  ) external initializer {
      // ... validation ...
-     // Treasury can be zero initially (backwards compatibility)
      strxToken = _strxToken;
      liquidityReserve = _liquidityReserve;
      usdc = _usdc;
-     treasury = _treasury;
      // ...
  }

- function setTreasury(address _treasury) external onlyRole(ADMIN_ROLE) {
-     address oldTreasury = treasury;
-     treasury = _treasury;
-     emit TreasuryUpdated(oldTreasury, _treasury);
- }

- event TreasuryUpdated(address indexed oldTreasury, address indexed newTreasury);
```

Buck Labs: Fixed in commit 1dfc8c2.
Cantina: Fix verified. Commit 1dfc8c2 fixes the issue by removing the unused treasury state, initializer parameter, setter, and event from `CollateralAttestation.sol`.

renounceOwnership() should be blocked to prevent accidentally losing contract ownership
State
- Fixed
PR #19
Severity
- Severity: Informational
Submitted by
Chinmay Farkya
Description
The following contracts have a defined owner address, which acts as an admin for protocol operations:

- STRX.sol
- AccessRegistry.sol
- LiquidityWindow.sol
- OracleAdapter.sol
If `renounceOwnership()` is called by the current owner by mistake, this can lead to permanent loss of ownership, which can create issues with protocol operations.

Recommendation

`renounceOwnership()` should be overridden and changed to always revert.

Buck Labs: Fixed in commit 36f7e21.
Cantina: Fix verified. Commit 36f7e21 fixes the issue by adding `renounceOwnership()` overrides to `AccessRegistry`, `LiquidityWindow`, and `OracleAdapter`.

Missing events in LiquidityReserve::initialize() function
State
Severity
- Severity: Informational
Submitted by
Chinmay Farkya
Description
The `initialize()` function in `LiquidityReserve.sol` handles the configuration of some contract addresses, but the related events are not emitted there.

The following events are missing for the treasurer address:

```solidity
emit TreasurerSet(treasurer_);
emit RecoverySinkSet(treasurer_, true);
```

The following events are missing for the liquidity window:

```solidity
emit LiquidityWindowSet(liquidityWindow_);
emit RecoverySinkSet(liquidityWindow_, true);
```

Recommendation
Consider emitting these events for consistency.
Buck Labs: Fixed in commit a40c5dc.
Cantina: Fix verified. Commit a40c5dc fixes the issue by emitting `LiquidityWindowSet`, `TreasurerSet`, and `RecoverySinkSet` during `LiquidityReserve.initialize`.

The RootUpdated event in AccessRegistry can be improved
State
Severity
- Severity: Informational
Submitted by
Chinmay Farkya
Description
In `AccessRegistry.sol`, the `setRoot()` function is used to push a Merkle root that helps determine which addresses are allowed to interact with the protocol.

When a new root is published, the following event is emitted:

```solidity
event RootUpdated(bytes32 indexed newRoot, uint64 indexed rootId);
```

Both the attestor service and the owner can publish new roots, but this event does not mention who published a given rootId.

Since the team wants to keep logs of who updated what, this event should also track who actually published the new rootId (for off-chain ops).
Recommendation
Add an "attestor" address to this event definition, and log `msg.sender` while emitting this event in `setRoot()`.

Buck Labs: Fixed in commit 1bb6023.
Cantina: Fix verified. Commit 1bb6023 fixes the issue by adding `updatedBy` to the `RootUpdated` event and emitting `msg.sender`.

USDC market price is not considered in minting calculations
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
Chinmay Farkya
Description
In a previous audit report, risks related to USDC depeg were discussed, but there are more impacts of a potential depeg, including over-minting of STRX coin.
In `requestMint()`, the USDC price is assumed to be exactly 1 USD and is used as-is in determining collateral value. The real USDC market price is not factored in: if USDC were to depeg, any user could come to the protocol and use their USDC (at a fixed face value of $1) to mint STRX.
This can lead to a death spiral for STRX as the new collateral backing (USDC) is worth less than what we assume it to be.
Over-valuing collateral is always a bad idea, especially when we are minting another token against it.
Recommendation
Consider using USDC market price to calculate real collateral value.
Buck Labs: Acknowledged. I think it's too late for us to incorporate a USDC oracle and dynamic USDC pricing into v1. It's definitely a top priority for the v2 contract ecosystem. For now, we are going to accept this risk, and if USDC depegs we are going to pause minting and redemptions.
Gas Optimizations: 6 findings
External self-call in isHealthy
Description
The health check calls the contract through its own external interface to fetch the latest price. This triggers ABI encoding and an external call to self, which is more expensive than a direct internal call and is unnecessary because the data is already available within the contract.
```solidity
(uint256 price, uint256 updatedAt) = this.latestPrice();
```

The behavior is correct, but the pattern adds avoidable gas overhead. Impact is limited to higher gas use for callers.
Recommendation
Expose the price function as a public view and call it directly, or add an internal helper such as `_latestPrice()` and use that from `isHealthy` to avoid the external self-call.

Buck Labs: Fixed in commit cef9102.
Cantina: Fix verified. Commit cef9102 fixes the issue by adding an internal `_latestPrice()` helper and updating `isHealthy()` to call it instead of `this.latestPrice()`, removing the external self-call overhead.

Redundant intermediate variable in collateral calculations
State
Severity
- Severity: Gas optimization
Submitted by
Sujith S
Description
Three instances in `CollateralAttestation.sol` use an unnecessary intermediate variable `reserveUSDC` before assigning to `R`, wasting gas on redundant memory operations:

```solidity
/// current implementation
uint256 reserveUSDC = IERC20(usdc).balanceOf(liquidityReserve);
uint256 R = reserveUSDC; // or R = _scaleToEighteen(reserveUSDC, ...)
```
Eliminate the intermediate variable as follows:

```solidity
uint256 R = IERC20(usdc).balanceOf(liquidityReserve);

uint256 R = _scaleToEighteen(
    IERC20(usdc).balanceOf(liquidityReserve),
    reserveAssetDecimals
);
```

Buck Labs: Fixed in commit d433c06.
Cantina: Fix verified. Commit d433c06 fixes the issue by removing the redundant `reserveUSDC` intermediate variable in `CollateralAttestation.sol`.

Inconsistent error handling: require vs custom errors in AccessRegistry
State
- Acknowledged
Severity
- Severity: Gas optimization
Submitted by
Sujith S
Description
`AccessRegistry.sol` uses `require` statements for error handling, which is inconsistent with the rest of the codebase and consumes more gas than the custom-error pattern used in the other contracts.

Recommendation

Consider refactoring to use custom errors to increase gas efficiency and be consistent with the rest of the codebase.
Buck Labs: Acknowledged.
Enums don't need to be initialized to their default value
State
Severity
- Severity: Gas optimization
Submitted by
Chinmay Farkya
Description
In `PolicyManager.sol`, the `initialize()` function sets the value of `_band` to `Band.Green`.

This is unnecessary because `_band` is an enum type and `Green` is the first entry in its definition, which means `Green` is already the default value of `_band`.
Consider removing this line to save some gas.
```diff
- _band = Band.Green;
```

Buck Labs: Fixed in commit 41c8c7d.
Cantina: Fix verified. Commit 41c8c7d fixes the issue by removing the redundant `_band = Band.Green` assignment.

ReentrancyGuardTransient can be used to save gas on all nonReentrant calls
State
Severity
- Severity: Gas optimization
Submitted by
Chinmay Farkya
Description
The following contracts use `ReentrancyGuardUpgradeable` from OpenZeppelin:

- STRX.sol
- LiquidityReserve.sol
- LiquidityWindow.sol

There is a more gas-efficient library for blocking reentrancy: `ReentrancyGuardTransient`.
Recommendation
Consider using `ReentrancyGuardTransient`.

Buck Labs: Fixed in commit 03050e3.
Cantina: Fix verified. Commit 03050e3 fixes the issue by migrating `LiquidityReserve`, `LiquidityWindow`, and BUCK to `ReentrancyGuardTransient` and adding the utility implementation.

Code cleanup in LiquidityWindow.sol
State
Severity
- Severity: Gas optimization
Submitted by
Chinmay Farkya
Description
The following calculation is done three times in the `requestMint()` logic:

```solidity
... = usdcAmount - feeUsdc;
```

The latter two instances can be safely replaced with `netAmount`, as the result of `usdcAmount - feeUsdc` is already stored there.
Consider replacing these repetitive calculations with
netAmount.Buck Labs: Fixed in commit 4c005ea.
Cantina: Fix verified. Commit 4c005ea fixes the issue by reusing
netAmountinstead of recomputingusdcAmount - feeUsdc.