Organization
- @kilnfi
Engagement Type
Cantina Reviews
Period
-
Repositories
Researchers
Findings
Medium Risk
7 findings
0 fixed
7 acknowledged
Low Risk
3 findings
0 fixed
3 acknowledged
Informational
2 findings
0 fixed
2 acknowledged
Medium Risk: 7 findings
Commission counts capped-exit leftovers as yield
State
- Acknowledged
Severity
- Severity: Medium
Submitted by
r0bert
Description
vPool.report() still books pulled exit-queue unclaimedFunds as __.traces.rewards. That classification is wrong for the combination of 2.1.0 integrations on the 1.0.3 core. These funds are not fresh staking yield. They are leftovers created when an exit ticket is capped at its maxExitable amount and later gets matched against a richer cask in the exit queue.

The issue becomes integration-visible because MultiPool treats any growth in pool value beyond injected principal as fee-bearing performance. _integratorCommissionEarned() does that directly:

function _integratorCommissionEarned(PoolInfo memory pool) internal view returns (uint256) {
    uint256 stakedPlusExited = _stakedEthValue(pool) + $exitedEth.get()[pool.id];
    uint256 injected = $injectedEth.get()[pool.id];
    if (injected >= stakedPlusExited) {
        return 0;
    }
    uint256 rewardsEarned = stakedPlusExited - injected;
    return LibBasisPoints.compute(rewardsEarned, $fees.get()[pool.id]);
}

Therefore, once vPool reinjects capped-exit leftovers and labels them as rewards, Native20 and the other MultiPool20 descendants start accruing integrator commission on value that did not come from validator performance.

This matters because the leftover value should simply return to the pool and benefit the remaining holders. It should not be turned into commission-bearing revenue for the integrator. The user-visible effect is a silent fee leak from capped exits into integrator commission accounting, even if the report itself contains no new validator rewards.
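To make the fee math concrete, here is a minimal Python model of the commission computation above, using the finding's hypothetical numbers (10% fee, a 1 ETH reinjected leftover). The helper name and basis-points layout are illustrative, not the contract's API.

```python
# Minimal model of _integratorCommissionEarned(): any pool value above the
# injected principal is treated as fee-bearing "rewards". Names and numbers
# mirror the 10%-fee example in this finding and are illustrative only.
ETH = 10**18
BPS_DENOMINATOR = 10_000

def integrator_commission_earned(staked_plus_exited_wei: int,
                                 injected_wei: int,
                                 fee_bps: int) -> int:
    """Commission accrued on any growth beyond injected principal."""
    if injected_wei >= staked_plus_exited_wei:
        return 0
    rewards_earned = staked_plus_exited_wei - injected_wei
    return rewards_earned * fee_bps // BPS_DENOMINATOR

# Pool with 100 ETH of injected principal and a 10% (1000 bps) fee.
# A 1 ETH capped-exit leftover is reinjected and labeled as rewards:
commission = integrator_commission_earned(101 * ETH, 100 * ETH, 1_000)
assert commission == ETH // 10  # 0.1 ETH charged on value that is not yield
```

The model shows why the classification matters: the formula cannot distinguish reinjected leftovers from real validator rewards, so any value labeled as growth is fee-bearing.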
The easiest way to see the problem is with a realistic example. Assume a Native20 instance has a 10% integrator fee and currently tracks 100 ETH of underlying value. Alice requests an exit worth 40 ETH, so her ticket is capped at 40 ETH through maxExitable. Before she claims, the exit queue later receives enough ETH that the same shares would now settle for 41 ETH. Alice is still limited to 40 ETH. The extra 1 ETH is moved into the exit queue's unclaimedFunds buffer because it is no longer claimable by the ticket holder. On the next report, the old core pulls that 1 ETH back into the pool and adds it to __.traces.rewards. MultiPool then interprets that 1 ETH as reward-like growth and accrues 0.1 ETH of commission, despite the fact that validators did not earn 1 ETH and no new external reward entered the system. The value came from the capped exit settlement itself.

The regression test below reproduces the behavior directly. It creates an unclaimed-funds buffer in the exit queue, submits a zero-reward report and shows that integratorCommissionEarned() becomes positive only because those capped-exit leftovers were reinjected and counted as rewards.

// Add to: test/integrations/MultiPool20.t.sol
function test_report_pulledUnclaimedFundsIncreaseIntegratorCommission() external {
    uint256 amount = 64 ether;
    uint256 exitAmount = 32 ether;
    uint256 unclaimedDelta = 1e12;
    address staker = makeAddr("staker1");

    stake(staker, amount);
    oracles__reportToCommit(op, amount);
    poolAdmin__bootstrapValidatorSet(o, op);
    {
        ctypes.ValidatorsReport memory rep = oracles__warpAndForwardReport(op, op.pool.lastReport());
        oracles__report(op, rep);
    }

    vm.prank(staker);
    mp.requestExit(exitAmount);

    uint256 ticketId = op.exitQueue.ticketIdAtIndex(0);
    ctypes.Ticket memory ticket = op.exitQueue.ticket(ticketId);
    expect(ticket.maxExitable).toEqual(exitAmount, "unexpected capped exit value");

    vm.deal(address(op.pool), exitAmount + unclaimedDelta);
    vm.prank(address(op.pool));
    op.exitQueue.feed{value: exitAmount + unclaimedDelta}(ticket.size);

    uint256[] memory ticketIds = new uint256[](1);
    ticketIds[0] = ticketId;
    uint32[] memory caskIds = new uint32[](1);
    caskIds[0] = 0;
    vm.prank(staker);
    op.exitQueue.claim(ticketIds, caskIds, 0);

    expect(op.exitQueue.unclaimedFunds()).toEqual(unclaimedDelta, "unclaimed funds buffer not created");
    expect(mp.integratorCommissionEarned(POOL_ID)).toEqual(0, "commission should stay zero before the pull-back report");

    uint256 underlyingBefore = mp.totalUnderlyingSupply();
    {
        ctypes.ValidatorsReport memory rep = oracles__warpAndForwardReport(op, op.pool.lastReport());
        oracles__report(op, rep);
    }

    expect(op.exitQueue.unclaimedFunds()).toEqual(0, "report should pull the unclaimed funds buffer");
    expect(mp.totalUnderlyingSupply()).toBeGreaterThan(
        underlyingBefore,
        "integrator underlying should increase when unclaimed funds are reinjected"
    );
    expect(mp.integratorCommissionEarned(POOL_ID)).toBeGreaterThan(
        0,
        "integrator commission should only increase because the old core counts unclaimed funds as rewards"
    );
}

Recommendation
Backport the upstream 2.1.0 fix and stop adding pulled exit-queue unclaimed funds to __.traces.rewards. These funds can still be added to __.traces.delta, since they do increase pool value, but they should not be treated as fee-bearing yield. The smallest safe change is to keep the pull-back accounting and remove the reward increment:

uint256 pulledAmount = _pullExitQueueUnclaimedFunds(__.increaseCredit);
__.increaseCredit -= pulledAmount;
__.traces.pulledExitQueueUnclaimedFunds = uint128(pulledAmount);
__.traces.delta += int128(uint128(pulledAmount));

If the core cannot be changed, the fallback is to exclude this source from integration commission calculations. That is a weaker fix because it spreads special-case logic into the integrations. Aligning vPool with the upstream 2.1.0 behavior is cleaner and fixes every MultiPool20 descendant at once.

Kiln: Acknowledged. Known 1.0.3 core behavior, fixed in 2.1.0 core. This is an accepted limitation of running 2.1.0 integrations on the live 1.0.3 core. The pools on mainnet have and always had 0 as operatorFee, meaning this path was never encountered.
maxExitable = 0 does not freeze exit-queue payouts
State
- Acknowledged
Severity
- Severity: Medium
Submitted by
r0bert
Description
vPool.report() still processes the exit queue whenever it holds shares, even if the oracle sets rprt.maxExitable to zero. The code computes exitDemand, burns exit-queue shares and calls feed() without checking that maxExitable is greater than zero.

This matters because upstream 2.1.0 uses maxExitable = 0 as a special emergency value to stop cask creation during slashing or other stress events. On this branch, that stop does not work. Earlier queued exiters can still be paid from newly exited ETH or exit-boost ETH while the oracle is explicitly trying to freeze payouts.

For example, assume the queue already holds a large pending exit and the next report also includes fresh exited ETH from validators. If the oracle sets maxExitable = 0 to keep funds inside the pool until the slashing window is clear, the current code still burns the queued shares and sends ETH into the exit queue. Consequently, users who are already in the queue can leave before the loss is fully socialized, and the remaining holders absorb more of the eventual damage.

Recommendation
Backport the upstream 2.1.0 guard and skip exit-queue processing when rprt.maxExitable == 0. The smallest safe change is to gate the block like this:

if (exitQueueBalance > 0 && rprt.maxExitable > 0) {
    ...
}

The local tests and compatibility notes should also be updated to reflect that maxExitable = 0 is an emergency stop for exit-queue payouts, not just a limit on future validator exit requests.

Kiln: Acknowledged. Known 1.0.3 core behavior, fixed in 2.1.0 core. The 1.0.3 core only checks exitQueueBalance > 0 before processing the exit queue. The 2.1.0 core guards with exitQueueBalance > 0 && rprt.maxExitable > 0. The 1.0.3 core does not treat maxExitable = 0 as an emergency stop. Accepted as a known limitation.

Non-activating validators stay fully priced at 32 ETH
State
- Acknowledged
Severity
- Severity: Medium
Submitted by
r0bert
Description
_totalUnderlyingSupply() treats every purchased-but-not-yet-activated validator as if it were still backed by a full 32 ETH. That is reasonable while activation is merely pending. It becomes wrong once a purchased validator never activates. This branch omitted the upstream 2.1.0 invalid-activation reporting and coverage mechanism, so there is no way to stop counting that missing validator as real backing.

This matters because the pool can continue reporting a par rate even after the backing is gone. A purchased validator consumes 32 ETH from the pool at deposit time. If that validator later turns out to be invalid and never activates, the pool balance is still gone, but _totalUnderlyingSupply() keeps adding the missing 32 ETH back through the purchased-versus-activated validator gap. The loss is hidden instead of crystallized.
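A minimal Python model of this accounting (illustrative names, not the contract's API) shows how the purchased-versus-activated gap keeps the reported rate at par after the backing is gone:

```python
# Model of the hidden-loss accounting: every purchased-but-not-activated
# validator is counted at a full 32 ETH, even if it will never activate.
# Function names and parameters are illustrative, not the contract's API.
ETH = 10**18
DEPOSIT_SIZE = 32 * ETH
RATE_PRECISION = 10**18

def total_underlying(pool_balance: int, purchased: int,
                     activated: int, activated_backing: int) -> int:
    pending_validators = purchased - activated
    return pool_balance + pending_validators * DEPOSIT_SIZE + activated_backing

def rate(pool_balance: int, purchased: int, activated: int,
         activated_backing: int, total_shares: int) -> int:
    underlying = total_underlying(pool_balance, purchased, activated, activated_backing)
    return underlying * RATE_PRECISION // total_shares

# Alice funds one validator that never activates: 32 ETH left the pool and
# the validator contributes nothing, yet the pool still reports par.
assert rate(0, purchased=1, activated=0, activated_backing=0,
            total_shares=32 * ETH) == RATE_PRECISION

# Bob then deposits a fresh 32 ETH and mints at par on phantom backing.
assert rate(32 * ETH, 1, 0, 0, 64 * ETH) == RATE_PRECISION
```

The phantom 32 ETH term never expires, which is exactly why the loss is socialized onto later depositors instead of being crystallized.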
rate() == 1e18. Bob then deposits a fresh 32 ETH and mints at par because the pool still looks fully backed. Alice can transfer her shares to the exit queue and exit for a full 32 ETH, funded by Bob's fresh deposit. Bob is left holding shares that still look fully backed on-chain even though the pool has already lost the first 32 ETH.

Recommendation
Backport the upstream invalid-activation handling so the oracle can report non-activating validators separately and the pool can require explicit coverage before those validators continue to count in underlying supply.
If the full 2.1.0 mechanism cannot be backported, the minimum safe rule is to stop counting permanently non-activating purchased validators as 32 ETH of backing without an explicit replacement or coverage step. Otherwise the pool can keep socializing a hidden loss onto later depositors and remaining holders.

Kiln: Acknowledged. Known 1.0.3 core behavior, fixed in 2.1.0 core. The 1.0.3 core has no invalidActivationCount reporting or coverage mechanism. _totalUnderlyingSupply() counts every purchased-but-not-activated validator at 32 ETH indefinitely. In practice, validator activation is monitored operationally and invalid activations have not occurred on the live deployment.

report() can over-request validator exits and block later queue funding
State
- Acknowledged
Severity
- Severity: Medium
Submitted by
r0bert
Description
vPool.report() requests more exits whenever exitDemand is higher than exitingProjection, but it never caps the new requests by the number of validators that are actually activated. The downstream vFactory.exitTotal() call only clamps against funded validators. It does not clamp against activated validators. Therefore requestedExits can become larger than the number of validators that can really exit.

This matters because report() later reuses requestedExits as if it were real future liquidity. At the following lines:

// src/vPool.sol#956-961
// ----- We compute the exiting projection, which is the amount of ethers that is expected to be exited soon.
//       This amount is based on the exiting amount, which is the amount of eth detected in the exit flow of the
//       consensus layer, and the amount of unfulfilled exit requests that are expected to be triggered by the
//       operator soon.
__.traces.exitingProjection = rprt.exiting;
uint256 currentRequestedExits = $requestedExits.get();
if (currentRequestedExits > rprt.stoppedCount) {
    __.traces.exitingProjection += uint128((currentRequestedExits - rprt.stoppedCount) * LibConstant.DEPOSIT_SIZE);
}

any requestedExits - stoppedCount gap is converted back into exitingProjection at 32 ETH per validator. Once the stored request count is impossible, the pool starts assuming exit ETH is on the way when it is not. That false projection then suppresses exit-queue funding from fresh deposits and keeps queued exits pending longer than necessary.

For example, consider a pool that has purchased two validators, of which only one has actually activated. If users request enough exits, the current code can still store requestedExits = 2. On the next report the pool treats both exits as pending future liquidity, even though only one validator can really leave. If a new depositor adds 64 ETH in the meantime, that deposit can remain idle in deposited instead of being used as exit-boost liquidity for the queue. The queue is delayed only because the accounting assumes a second exiting validator that does not exist.

Recommendation
Backport the upstream 2.1.0 cap before forwarding the request to the withdrawal recipient. newExitRequests should never exceed rprt.activatedCount - currentRequestedExits. The smallest safe change is:

uint256 newExitRequests = LibUint256.ceil(
    LibUint256.min(__.exitDemand - __.traces.exitingProjection, rprt.maxExitable),
    LibConstant.DEPOSIT_SIZE
);
newExitRequests = LibUint256.min(newExitRequests, rprt.activatedCount - currentRequestedExits);

That keeps requestedExits aligned with the oracle-reported activated validator set and prevents exitingProjection from being inflated by exits that cannot happen.

Kiln: Acknowledged. Known 1.0.3 core behavior, fixed in 2.1.0 core. The 1.0.3 core does not cap newExitRequests by activatedCount. The 2.1.0 core adds: newExitRequests = LibUint256.min(newExitRequests, rprt.activatedCount - currentRequestedExits). Accepted as a known limitation. The downstream vFactory.exitTotal() provides a partial clamp against funded validators, limiting the practical blast radius.

Global member ejection does not fully remove the global oracle's voting weight
State
- Acknowledged
Severity
- Severity: Medium
Submitted by
r0bert
Description
setGlobalMemberEjectionStatus(true) only flips the ejection boolean and emits SetGlobalMemberEjectionStatus. It does not clear the current reporting state if the global oracle already voted. submitReport() also keeps granting the implicit global vote whenever msg.sender == _globalOracleMember(), even after ejection, as long as that address is also present in the local member list.

This means ejection mode does not actually deliver the quorum reduction it advertises. A stale global vote cast before ejection can still count after ejection. A global oracle that is also a listed member can also keep contributing two votes after ejection. In both cases, a report can finalize with fewer physical signers than the post-ejection quorum intends.
This matters because ejection mode is supposed to remove the global oracle from the active voting set once there are enough local members. On this branch, the protocol can say the global member is ejected while still inheriting its old vote or its implicit extra vote.
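The double-count can be sketched with a minimal Python model of the vote accounting described above (illustrative structure, not the contract's storage layout):

```python
# Sketch of the 1.0.3-style vote counting: the implicit global vote is never
# gated on the ejection flag, so one physical signer can keep contributing
# two votes after ejection. Names are illustrative, not the contract's API.
def count_votes(voters, members, global_oracle, global_ejected):
    votes = 0
    for voter in voters:
        if voter in members:
            votes += 1  # local-member vote
        if voter == global_oracle:
            votes += 1  # implicit global vote; `global_ejected` is never consulted
    return votes

members = ["oracleA", "oracleB", "globalOracle"]  # global oracle also listed locally

# Even with ejection active, the global oracle's submission carries two votes,
# so two physical signers reach a vote count of three:
assert count_votes(["globalOracle", "oracleA"], members,
                   "globalOracle", global_ejected=True) == 3
```

The unused `global_ejected` parameter is the point of the sketch: ejection changes nothing in the counting path, which is exactly the gap the finding describes.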
Recommendation
Backport the upstream 2.1.0 ejection handling. Ejection should use a real _globalMemberEjected() predicate, clear current reporting state if the global bit already voted and suppress the implicit global vote whenever the global member is considered ejected.

The minimum safe change is to clear the current vote tracker and variant counts when ejection becomes active after a global vote, and to stop adding the implicit global vote to voteCount once ejection is active.

Kiln: Acknowledged. Known 1.0.3 core behavior, fixed in 2.1.0 core. The 1.0.3 setGlobalMemberEjectionStatus() only flips the boolean and emits an event. It does not clear reporting state or suppress the implicit global vote. The 2.1.0 core adds a _globalMemberEjected() helper, clears state on ejection, and gates vote registration, counting and emission on ejection status. Accepted as a known limitation.

Rotating the global oracle preserves stale votes and blocks the new global oracle
State
- Acknowledged
Severity
- Severity: Medium
Submitted by
r0bert
Description
vOracleAggregator represents the implicit global oracle in vote-tracker bit 0. The Nexus can later change the global oracle address through setGlobalOracle(), but the aggregator does not clear its current reporting state when that happens. As a result, bit 0 is reused across two different identities.

If the old global oracle already voted for the current epoch, the new global oracle inherits that stale bit and is treated as AlreadyReported. The old vote is still counted even though the underlying identity changed. This creates two problems at once. The current epoch can keep counting influence from an address that is no longer the global oracle. It can also deadlock report finalization if the new global oracle's vote is required to reach quorum.

Recommendation
Clear the aggregator's reporting state whenever the corresponding Nexus global oracle address changes.
If that is not feasible through the current architecture, stop keying implicit-global participation off a bare bit position that survives identity rotation. The voting state should either be reset on rotation or tied to the current global-oracle address rather than to a fixed slot alone.
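A small bitmask sketch (hypothetical names, not the aggregator's actual storage) illustrates how reusing bit 0 across identities both preserves the stale vote and locks out the new oracle:

```python
# Sketch of the bit-0 reuse problem: the vote tracker keys the implicit
# global vote to a fixed bit position, not to the current oracle address.
GLOBAL_BIT = 1 << 0  # bit 0 is the implicit global-oracle slot

class Aggregator:
    def __init__(self):
        self.vote_tracker = 0
        self.vote_count = 0

    def submit_global_vote(self):
        if self.vote_tracker & GLOBAL_BIT:
            raise RuntimeError("AlreadyReported")
        self.vote_tracker |= GLOBAL_BIT
        self.vote_count += 1

agg = Aggregator()
agg.submit_global_vote()      # old global oracle votes for the current epoch

# The Nexus rotates the global oracle, but the aggregator state is untouched:
# the stale vote still counts, and the NEW oracle is rejected for this epoch.
try:
    agg.submit_global_vote()  # new identity, same bit 0
    rejected = False
except RuntimeError:
    rejected = True
assert rejected and agg.vote_count == 1
```

Keying participation to the address instead of the bare bit, or clearing the tracker on rotation, removes both the stale influence and the lockout.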
Kiln: Acknowledged. Known 1.0.3 core behavior. Nexus._setGlobalOracle() does not clear aggregator reporting state on rotation. If the old global oracle already voted for the current epoch, the new oracle inherits a stale bit and is treated as AlreadyReported. Neither 1.0.3 nor 2.1.0 clears aggregator state directly from the Nexus setter, though 2.1.0's improved ejection handling mitigates the impact. Oracle rotation is an infrequent admin action typically performed between epochs when the vote tracker is already cleared. Accepted as a known limitation.

vTreasury clears the wrong pool share balance
State
- Acknowledged
Severity
- Severity: Medium
Submitted by
r0bert
Description
vTreasury records received pool shares under the pool address in onvPoolSharesReceived, since msg.sender is the pool when shares are transferred to the treasury. exitShares also reads the pending balance with that same pool key before calling _exitAndFundCoverageFund. _exitAndFundCoverageFund then clears $poolShares with msg.sender.k(). At that point msg.sender is the operator or the global recipient, not the pool whose shares are being exited. Therefore the real pool entry is left unchanged even though the treasury has already transferred the underlying shares out.

The impact is not theft. The impact is that treasury accounting for one pool can become stuck in a permanently inflated state after the first successful exitShares(pool) call. Once that happens, later attempts to exit or withdraw shares for that pool can revert because the treasury believes it owns more shares than it actually holds. Consequently, operator commission distribution and any auto-cover handling that depends on those exits can stop working for the affected pool until storage is repaired.

For example, assume the treasury receives 100 shares from pool P. It stores that amount under $poolShares[P]. The operator then calls exitShares(P), which transfers the real 100 shares out, but the contract clears $poolShares[operator] instead of $poolShares[P]. The treasury now holds 0 real shares from P, while bookkeeping still says it holds 100. If the treasury later receives 20 more shares from P, the real balance becomes 20 but the stored balance becomes 120. A new exitShares(P) call will try to transfer 120 shares even though only 20 exist in the treasury, so the call reverts. The 100-share mismatch remains forever unless someone repairs storage.

Recommendation
Clear the pool-keyed entry instead of the caller-keyed entry. The smallest safe change is to reset $poolShares with address(pool).k() before transferring any shares out:

$poolShares.get()[address(pool).k()] = 0;

If this function has already been used on live deployments, add a one-off repair step to fix any stale $poolShares entries before relying on future exits or withdrawals.

Kiln: Acknowledged. Known 1.0.3 core behavior, fixed in 2.1.0 core. Accepted as a known limitation. The operator fee on the live pools is set to 0% and vTreasury is not used in the current flows, so this has no practical impact on the live deployment.
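The bookkeeping drift in the 100/20/120 example above can be checked with a few lines of Python (illustrative structure, not the contract's storage layout):

```python
# Model of the treasury bug: exitShares clears the CALLER's bookkeeping
# entry instead of the pool's, so the pool-keyed balance only ever grows.
pool_shares = {}   # bookkeeping: key -> recorded share balance
real_balance = 0   # shares the treasury actually holds

def on_shares_received(pool, amount):
    global real_balance
    pool_shares[pool] = pool_shares.get(pool, 0) + amount
    real_balance += amount

def exit_shares(pool, caller):
    global real_balance
    pending = pool_shares.get(pool, 0)
    if pending > real_balance:
        raise RuntimeError("transfer exceeds real balance")  # later calls revert
    real_balance -= pending
    pool_shares[caller] = 0          # bug: should clear pool_shares[pool]

on_shares_received("P", 100)
exit_shares("P", "operator")         # transfers 100 but clears the wrong key
assert real_balance == 0 and pool_shares["P"] == 100   # stale entry survives

on_shares_received("P", 20)          # real balance 20, recorded balance 120
try:
    exit_shares("P", "operator")
    reverted = False
except RuntimeError:
    reverted = True
assert reverted                      # treasury is now stuck for pool P
```

The model reproduces the report's numbers exactly: after the first exit the stored balance is inflated by 100 shares, and every later exit attempt for pool P reverts.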
Low Risk: 3 findings
vTreasury keeps the previous operator's fee vote after operator rotation
State
- Acknowledged
Severity
- Severity: Low
Submitted by
r0bert
Description
vTreasury stores fee votes as raw values with an active bit. It does not store which operator cast the vote. When setOperator updates the operator address, _setOperator only writes the new address and emits SetOperator. It does not clear an already active operator vote.

Consequently, an outgoing operator can leave a live vote behind. The current global recipient can later cast the same fee value and finalize the treasury fee even though the current operator never agreed. This is a governance-integrity issue for treasury fee changes. It does not directly expose user balances, but it means operator rotation does not revoke pending fee influence from the old operator.
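The rotation gap can be sketched as follows (hypothetical structure; the real treasury stores votes with an active bit rather than a dictionary):

```python
# Fee votes are stored per ROLE, not per voter identity, so a vote cast by
# the outgoing operator survives rotation and can later be matched.
state = {"operator": "mallory", "operator_vote": None}

def operator_vote(fee_bps):
    state["operator_vote"] = fee_bps      # raw value, no voter identity stored

def set_operator(new_operator):
    state["operator"] = new_operator      # 1.0.3: the pending vote is NOT cleared

def recipient_vote_and_maybe_finalize(fee_bps):
    # The global recipient casts the matching value; agreement finalizes the fee.
    if state["operator_vote"] == fee_bps:
        return fee_bps
    return None

operator_vote(500)          # outgoing operator preloads a 5% vote
set_operator("alice")       # rotation keeps the stale vote alive
assert recipient_vote_and_maybe_finalize(500) == 500  # finalized without alice
```

Binding each stored vote to the voter's address, as the recommendation below suggests, makes the stale entry unusable after rotation.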
Recommendation
Clear treasury vote state whenever the operator changes.
function _setOperator(address newOperator) internal {
    LibSanitize.notZeroAddress(newOperator);
    $operator.set(newOperator);
    emit SetOperator(newOperator);
    $operatorFeeVote.set(0);
    $globalRecipientFeeVote.set(0);
}

If you want a stronger fix, bind stored votes to the voter identity so a role rotation automatically invalidates votes cast by the previous role holder.

Kiln: Acknowledged. Known 1.0.3 core behavior, fixed in 2.1.0 core. Accepted as a known limitation. In 2.1.0, _setOperator() at line 281 clears votes if an operator vote exists: if ($operatorFeeVote.get() > 0) { _clearVotes(); }. The operator fee on the live pools is set to 0% and vTreasury is not used in the current flows, so this has no practical impact on the live deployment.

vTreasury keeps the previous global recipient's fee vote after Nexus rotation
State
- Acknowledged
Severity
- Severity: Low
Submitted by
r0bert
Description
voteFee stores the global recipient vote as a raw fee value with an active bit. It does not bind that vote to a specific globalRecipient address. Authorization is checked against the current Nexus.globalRecipient() value at call time, but the stored vote itself remains in the treasury when Nexus.setGlobalRecipient rotates the role.

Consequently, a previous global recipient can preload a vote, get rotated out, and still influence a later fee update. The current operator can match that stale value and finalize the treasury fee even though the current global recipient never agreed. This is a governance-integrity issue around treasury fee changes rather than a direct loss of user funds.
Recommendation
Clear treasury vote state whenever Nexus.globalRecipient changes, or store the address associated with each live vote and reject it if that address is no longer the current global recipient.

If the treasury is expected to keep using role-based voting, consider adding a reset hook on global-recipient rotation so old pending votes cannot survive the role change.
Kiln: Acknowledged. Known limitation, not fixed in any version. Valid observation. Neither 1.0.3 nor 2.1.0 clears treasury vote state on global recipient rotation: Nexus._setGlobalRecipient() only writes the new address and emits the event. A previous global recipient's vote can survive rotation and be matched by the current operator to finalize a fee change. However, the practical risk is limited: it requires the current operator to independently and deliberately match the stale vote value, and the impact is constrained to treasury fee governance, not user funds. The operator fee on the live pool is set to 0% and the treasury is not used in the current integration flow.

vTreasury lets authorized callers withdraw pool shares without funding autoCover
State
- Acknowledged
Severity
- Severity: Low
Submitted by
r0bert
Description
vTreasury only applies autoCover(pool) inside _exitAndFundCoverageFund, where part of the operator's shares are redirected to the pool's coverage recipient before the rest is sent to the exit queue. withdraw(token) does not enforce that rule. If token is a pool share token, the function simply splits the entire treasury balance between the global recipient and the operator. Consequently, an authorized caller can bypass operator-configured coverage funding by choosing withdraw(address(pool)) instead of exitShares(pool). The coverage recipient receives nothing even when autoCover(pool) is non-zero. This weakens the intended slashing buffer for that pool and lets a fee recipient skip the coverage routing that the treasury configuration was supposed to enforce.

Recommendation
Do not allow direct withdrawal of treasury-held pool shares. Force that case through exitShares(pool), where autoCover is applied:

function withdraw(address token) external onlyOperatorOrGlobalRecipient {
    LibSanitize.notZeroAddress(token);
    if ($poolShares.get()[token.k()] > 0) {
        revert CannotWithdrawPoolShares(token);
    }
    ...
}

This matches the local 2.1.0 behavior and preserves the intended coverage routing for treasury pool shares.

Kiln: Acknowledged. Known 1.0.3 core behavior, fixed in 2.1.0 core. Accepted as a known limitation. The operator fee on the live pools is set to 0% and vTreasury is not used in the current flows, so this has no practical impact on the live deployment.
Informational: 2 findings
The global oracle can submit a report alone when it is the only listed member
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
r0bert
Description
vOracleAggregator treats the contract as ready as soon as members.length > 0. It also gives msg.sender one vote as a local member and one more vote if the same address is also the Nexus global oracle. Therefore, if the only listed member is the global oracle itself, that one address gets 2 votes against a quorum of 2.

This breaks the intended quorum model. The report is supposed to require some non-global corroboration before it reaches the pool. On this branch, the global oracle can satisfy the entire quorum alone by being added as the only listed member.
Upstream 2.1.0 explicitly lists this as an audit fix.

Recommendation

Backport the upstream 2.1.0 readiness rule so a sole listed member only makes the aggregator ready when that member is not the global oracle.

If that full backport is not taken, the minimum safe fix is to reject the configuration where the global oracle is the only effective member in the voting set. The aggregator should never let one address satisfy both the local-member vote and the global-member vote with no other signer participating.
Kiln: Acknowledged. Known 1.0.3 core behavior, fixed in 2.1.0 core. The 1.0.3 _ready() only checks members.length > 0, allowing a sole member who is also the global oracle to satisfy quorum alone with 2 votes. The 2.1.0 core's _ready() rejects this configuration: it returns false when the sole member is the global oracle. Accepted as a known limitation.

Bare Native20 upgrade locks users out until rights are re-seeded
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
r0bert
Description
The new Native20 implementation starts enforcing AccountList rights immediately. When an account has no explicit rights entry, _getRights() falls back to defaultRights. _checkNotForbiddenAndAuthorizations() then reverts if the required bit is missing.

This becomes an upgrade issue because the live mainnet deployments still have defaultRights == 0. After only upgradeTo(newImplementation), ordinary users no longer satisfy the new authorization checks. MultiPool20._stake() starts failing for fresh users, and MultiPool20._requestExit() starts failing for existing holders for the same reason. The live-fork reproduction in test/e2e/Native20.upgrade.v2_1_0.research.t.sol shows this on all 24 listed mainnet Native20 proxies: a user can stake before the upgrade, but after only the implementation swap a new user reverts on stake() and the pre-upgrade holder reverts on requestExit() with MissingAuthorizations.
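The rights fallback can be modeled as a simple bitmask check (the flag values below are hypothetical; the real constants live in AccountList and are not reproduced in this report):

```python
# Hypothetical bit values for the STAKE / REQUEST_EXIT rights used in the
# fork test; the actual bit layout is an assumption of this sketch.
STAKE, REQUEST_EXIT = 1 << 0, 1 << 1

def get_rights(explicit_rights, default_rights, account):
    # No explicit entry for the account -> fall back to defaultRights.
    return explicit_rights.get(account, default_rights)

def check_authorized(rights, required_bit):
    if rights & required_bit == 0:
        raise PermissionError("MissingAuthorizations")

# Live mainnet state after a bare upgradeTo(): defaultRights == 0,
# and no user has an explicit rights entry.
default_rights = 0
try:
    check_authorized(get_rights({}, default_rights, "alice"), STAKE)
    locked_out = False
except PermissionError:
    locked_out = True
assert locked_out  # every user is locked out until setDefaultRights() runs

# After the second admin action seeds the defaults, access is restored:
default_rights = STAKE | REQUEST_EXIT
check_authorized(get_rights({}, default_rights, "alice"), REQUEST_EXIT)
```

The model makes the lockout window explicit: between the implementation swap and the setDefaultRights() transaction, every path through the authorization check reverts.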
upgradeTo()is restricted to the ERC1967 proxy admin, whilesetDefaultRights()is restricted to the separate integration admin stored in contract state. Therefore the rollout is not a safe standalone implementation swap. There is a user lockout window until the second admin action completes. The existing fork test hides this because it always callssetDefaultRights(STAKE | REQUEST_EXIT)before checking any user behavior.Recommendation
Do not treat
upgradeTo(newImplementation)as a safe standalone rollout.If current user stake and exit access must be preserved, seed the intended default rights atomically during the upgrade, for example through an upgrade reinitializer or
upgradeToAndCall(). If that is not possible, make thesetDefaultRights()transaction an explicit required part of the rollout procedure and document that users are locked out until it executes.Kiln: Acknowledged. Valid observation. The recommendation to use
upgradeToAndCall()to seed default rights atomically is acknowledged but breaks the separation of concerns between admins and would require extra code for a one time action. Upgrade is a one time action, both actions can be coordinated fairly easily, at block n and n+1 for example, which is far simpler and requires no extra code while having virtually no impact on users.