Organization
- @Espresso-Systems
Engagement Type
Cantina Reviews
Period
-
Repositories
N/A
Researchers
-
Findings
Medium Risk
5 findings
3 fixed
2 acknowledged
Low Risk
15 findings
14 fixed
1 acknowledged
Informational
2 findings
0 fixed
2 acknowledged
Medium Risk: 5 findings
Rogue Key Attacks Against BLS Aggregate Signature Verification
State
- Acknowledged
Severity
- Severity: Medium
Submitted by
Blockdev
Description
The approach taken in this implementation is secure if either (1) the aggregated messages are unique, or (2) verification keys are checked to have a proof of knowledge of the secret signing key. Otherwise, the scheme would be susceptible to "rogue key attacks".
Recommendation
In addition to proofs of knowledge, there are a few other variants of BLS aggregate/multi-signatures that protect against rogue key attacks; see the referenced resources for more information on the options.
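As one illustration of the proof-of-knowledge route, the sketch below shows a registration-time proof-of-possession (PoP) check. All types and functions here are illustrative stand-ins, not the API of the scheme under review.

```rust
struct SignKey(u64);     // placeholder for the secret scalar
struct VerKey(Vec<u8>);  // placeholder for the serialized group element
struct Signature(Vec<u8>);

fn sign(_sk: &SignKey, _msg: &[u8]) -> Signature { unimplemented!() }
fn verify(_vk: &VerKey, _msg: &[u8], _sig: &Signature) -> bool { unimplemented!() }

/// The key owner signs its own serialized verification key (ideally under
/// a dedicated PoP domain separator) when registering the key.
fn prove_possession(sk: &SignKey, vk: &VerKey) -> Signature {
    sign(sk, &vk.0)
}

/// Verifiers run this once per key before admitting it to any aggregate;
/// keys without a valid PoP are rejected, blocking rogue key attacks.
fn check_possession(vk: &VerKey, pop: &Signature) -> bool {
    verify(vk, &vk.0, pop)
}
```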
Cantina
The client assured the team that, at another stage of the larger Espresso system, BLS verification keys are validated using proof-of-possession.
Signatures Validated to be On-Curve
State
Severity
- Severity: Medium
Submitted by
Blockdev
Description
For all verification methods (`verify`, `aggregate_verify`, and `multi_sig_verify`), if the signature is not checked to be on-curve (or in the appropriate large prime-order subgroup of the elliptic curve), then various "low-order" attacks can be used to leak information about the secret key.
Recommendation
For BN254, a subgroup check in G1 is not needed, but an inexpensive on-curve check is still important.
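A minimal sketch of such a check using arkworks types; the surrounding deserialization flow is an assumption, not taken from the reviewed code.

```rust
use ark_bn254::{Fq, G1Affine};

/// Reject signature points that are not on the BN254 curve. G1 has
/// cofactor 1, so no subgroup check is needed, but a malformed (x, y)
/// pair must still be rejected before any pairing operation.
fn validated_signature_point(x: Fq, y: Fq) -> Option<G1Affine> {
    let p = G1Affine::new_unchecked(x, y);
    p.is_on_curve().then_some(p)
}
```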
Software Execution Timing Side Channel of Secret Signing Key
Severity
- Severity: Medium
Description
The group scalar multiplication at the core of the BLS signature is implemented using the `arkworks` library, which currently does not support constant-time elliptic curve / field arithmetic. Fine-grained timing of this operation can leak bits of the signing key.
Recommendation
If the Espresso system admits a channel for untrusted users to time this operation, consider switching to a constant-time group scalar multiplication implementation.
Parameters and Payload Length Uncommitted Leading to Ambiguous Recover Behavior
State
Severity
- Severity: Medium
Submitted by
Blockdev
Description
Certain metadata, like `param` (including the `recovery_threshold` and `total_weights`) and, more importantly, `payload_byte_len`, are not part of the commitment. Instead, these values are passed along in an unauthenticated manner as part of the shares. For example, metadata like `payload_byte_len` and `num_polys` are read from the zeroth share.

Consider the following concern: if `payload_byte_len` is not committed, does that mean the data can be arbitrarily truncated during decoding, which might have implications for downstream execution? Currently, this function allows for arbitrary truncation.

Even without arbitrary truncation, there is a concern around ambiguous truncation of padded payloads. When the payload is not a multiple of `field_bytes_len`, the encoding pads with zeros during dispersal (see Line 182). Currently, the unauthenticated `payload_byte_len` is the only way to tell how to recover the correct data, i.e., truncate the correct number of padded zeros.
Recommendation
The following can be implemented to prevent ambiguous recovery:
- Add `params` and `payload_byte_len` to the commitment.
- Verify in `verify_share` that the shares include vectors with the appropriate number of evaluations (`num_polys` evaluations in each share vector, where `num_polys` is computed from `payload_byte_len` and `recovery_threshold`).
Cantina
The client informed us that it was undesirable to change the commitment due to downstream serialization issues. The client opted to change the padding to a deterministic variable length padding scheme so that truncation is unambiguous. Depending on the trust model of how the VID scheme is being used within the larger Espresso system, there may still be concerns around parameters and payload length not being authenticated. The team is unable to verify without a broader review of the Espresso system.
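For illustration, one common deterministic, unambiguous padding scheme appends a 0x01 marker byte followed by zeros. This is a sketch of the general idea only; the client's actual scheme was not reviewed here and may differ.

```rust
/// Pad `payload` to a multiple of `field_bytes_len` by appending a 0x01
/// marker byte followed by zeros.
fn pad(payload: &[u8], field_bytes_len: usize) -> Vec<u8> {
    assert!(field_bytes_len > 0);
    let mut out = payload.to_vec();
    out.push(0x01);
    while out.len() % field_bytes_len != 0 {
        out.push(0x00);
    }
    out
}

/// Recover the payload: strip trailing zeros, then require the marker.
/// Recovery is unambiguous without trusting an external length field.
fn unpad(mut padded: Vec<u8>) -> Option<Vec<u8>> {
    while padded.last() == Some(&0x00) {
        padded.pop();
    }
    (padded.pop()? == 0x01).then_some(padded)
}
```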
Merkle Path Verification Can Equivocate on Depth of Proof
State
Severity
- Severity: Medium
Submitted by
Blockdev
Description
Verification of Merkle tree proofs only checks up to the proof length. Importantly, this means that proofs for leaves can verify even if the path is not of the full tree height. This can be problematic, as it may be possible to have multiple values verify for a specific position at different heights of the path.
Take, for example, the `FixedLenPoseidon2Hash` `DigestAlgorithm` (implemented in prelude.rs Line 60). The `digest_leaf` algorithm places the leaf element in the second-to-last position and the position in the last position. Consider a node at depth i with a proof for `leaf_val` at depth i+1:

node = (default…, leaf_val, pos)

You could produce a proof of this `leaf_val` as the value for position `pos`. However, if i+1 is not the max depth, then it may be possible to produce another valid proof for `pos`. Consider:

leaf_val = H(default…, leaf_val', pos)

Then one could produce another accepting proof for `leaf_val'` at depth i+2 for `pos` (if the `pos` traversal path matches up with the second-to-last position) for the node:

node = (default…, leaf_val', pos)
Recommendation
There are a couple of options that can be taken to mitigate:
- (Option 1) Enforce that all leaf nodes for successful lookups appear at max depth equal to `height`.
- (Option 2) Use a different hashing approach for leaf nodes so that they cannot be confused with internal nodes, e.g., enforce that leaves are hashed as H(“LEAF”, default…, leaf, pos) and internal nodes as H(“INTERNAL”, …), as in the sketch below.
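A minimal sketch of Option 2 with a hypothetical byte-oriented hash `h`; the tags and node layout are illustrative, not the library's actual encoding.

```rust
const LEAF_TAG: &[u8] = b"LEAF";
const INTERNAL_TAG: &[u8] = b"INTERNAL";

/// Hash a leaf node under a leaf-only tag so it can never collide with
/// an internal-node digest of the same bytes.
fn digest_leaf(h: impl Fn(&[u8]) -> [u8; 32], leaf: &[u8], pos: u64) -> [u8; 32] {
    let mut input = LEAF_TAG.to_vec();
    input.extend_from_slice(leaf);
    input.extend_from_slice(&pos.to_le_bytes());
    h(&input)
}

/// Hash an internal node under the internal-node tag.
fn digest_internal(h: impl Fn(&[u8]) -> [u8; 32], children: &[[u8; 32]]) -> [u8; 32] {
    let mut input = INTERNAL_TAG.to_vec();
    for c in children {
        input.extend_from_slice(c);
    }
    h(&input)
}
```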
Low Risk: 15 findings
Domain Separation for Hash Function
State
- Acknowledged
Severity
- Severity: Low
Submitted by
Blockdev
Description
Domain separation in hash functions that are modeled as random oracles during security proofs can be useful to prevent certain types of replay attacks within complex systems. Here, a cipher suite identifier is used as a domain separator.
Ethereum, for example, computes a signing domain (ref); the domain is calculated by the compute_domain() function, which combines one of ten domain types with a mash-up of the fork version and the genesis validators root.
Recommendation
A more expressive domain separation functionality including context of signature use may be appropriate depending on the primitive’s use within the larger Espresso system.
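As a sketch of the idea (all names illustrative): include an application context alongside the cipher suite identifier, length-prefixed to keep the encoding unambiguous.

```rust
/// Build the bytes actually hashed/signed: cipher suite id, then a
/// length-prefixed context string, then the message.
fn domain_separated_input(ciphersuite_id: &[u8], context: &[u8], msg: &[u8]) -> Vec<u8> {
    assert!(context.len() <= u8::MAX as usize);
    let mut out = ciphersuite_id.to_vec();
    out.push(context.len() as u8); // length prefix prevents ambiguity
    out.extend_from_slice(context);
    out.extend_from_slice(msg);
    out
}
```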
Poseidon2 Hash Structs not bound to use Poseidon2
Severity
- Severity: Low
Description
In `FixedLenPoseidon2Hash` and `VariableLenPoseidon2Hash`, the documentation misleadingly states that this sponge-based CRHF uses the Poseidon2 permutation. However, nothing about these structs requires use of the Poseidon2 permutation: they use it only if parameterized by a Poseidon2 sponge `S`.
Recommendation
To prevent misuse of these primitives, i.e., a developer accidentally instantiating the CRHF with a different sponge, add a dummy trait `PoseidonSponge` and require `S: Sponge<U = F> + PoseidonSponge`. Add dummy implementations of `PoseidonSponge` to existing implementations, as sketched below.
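A sketch of the marker-trait bound; `Sponge` here is a simplified stand-in for the crate's actual sponge trait.

```rust
use core::marker::PhantomData;

/// Simplified stand-in for the crate's sponge trait.
trait Sponge { type U; }

/// Dummy marker trait: implemented only for Poseidon2-backed sponges.
trait PoseidonSponge {}

/// The extra bound rejects non-Poseidon2 sponges at compile time.
struct FixedLenPoseidon2Hash<F, S: Sponge<U = F> + PoseidonSponge> {
    _marker: PhantomData<(F, S)>,
}
```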
Different from 4x4 MDS Matrix from Reference Implementation
Severity
- Severity: Low
Submitted by
Blockdev
Description
The 4x4 MDS matrix used here differs from that of the Horizen Labs reference implementation. Instead, it matches a different MDS matrix choice made by Plonky3. This means that hash evaluations will not match those of Horizen Labs.
Recommendation
If matching the reference is desired, switch the encoded MDS matrix as noted.
Different Behavior for Length 4 States from Reference Implementation
Severity
- Severity: Low
Submitted by
Blockdev
Description
When the state length is `T = 4`, this implementation applies the 4x4 matrix M and then completes the circulant matrix (doubling the state). This matches the Plonky3 implementation, but differs from the Horizen Labs reference implementation and the paper's description (Sec. 5.1 of https://eprint.iacr.org/2023/323.pdf), in which for `T = 4` the circulant matrix is not applied.
Recommendation
If matching the reference is desired, switch the behavior for `T = 4` as noted.
Out-of-bounds Check on Shares Provided During VID Recover
State
Severity
- Severity: Low
Submitted by
Blockdev
Description
The `shares` vector is indexed without checking if it is empty.
Recommendation
Include a bounds check.
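A minimal sketch of the check, generic over the share type; the error string is illustrative.

```rust
/// Fail gracefully instead of panicking when `shares` is empty.
fn zeroth_share<T>(shares: &[T]) -> Result<&T, &'static str> {
    shares.first().ok_or("recover: shares vector is empty")
}
```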
Out-of-bounds Check on Payload Provided During VID Recover
State
Severity
- Severity: Low
Submitted by
Blockdev
Description
The `payload` vector is indexed without checking if it is empty.
Recommendation
Include a bounds check.
Out-of-bounds Check if Range and Payload Length Do Not Match in Recover
State
Severity
- Severity: Low
Submitted by
Blockdev
Description
The `payload` vector is indexed by the input range and also by assuming it is of length `num_polys`. If either of these assumptions is wrong, an out-of-bounds panic will occur.
Recommendation
Include a bounds check.
Summation of Weights May Overflow During VID Recover
State
Severity
- Severity: Low
Submitted by
Blockdev
Description
The `collected_weights` summation may overflow from adversarial input. Specifically, `.sum()` may overflow; from the standard library documentation:

    /// # Panics
    ///
    /// When calling `sum()` and a primitive integer type is being returned, this
    /// method will panic if the computation overflows and debug assertions are
    /// enabled.
Recommendation
Include an overflow check during summation.
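A sketch of a checked summation; the `u32` weight type is an assumption about the share format.

```rust
/// Returns None on overflow instead of panicking or silently wrapping.
fn collected_weight(weights: &[u32]) -> Option<u32> {
    weights.iter().try_fold(0u32, |acc, &w| acc.checked_add(w))
}
```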
Validity Check for Collected Weights does not Consider Overlapping Ranges
State
Severity
- Severity: Low
Submitted by
Blockdev
Description
For recover to succeed, the number of unique collected weights must be greater than the recovery threshold. The current check does not enforce this uniqueness condition, which is violated by overlapping weight ranges.
Recommendation
Include a uniqueness check or a non-overlap check.
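A sketch of a non-overlap check over the shares' weight ranges; a `Range<usize>` per share is an assumption about the share format.

```rust
use std::ops::Range;

/// True iff the ranges are pairwise disjoint, so each weight unit is
/// counted at most once toward the recovery threshold.
fn ranges_disjoint(mut ranges: Vec<Range<usize>>) -> bool {
    ranges.sort_by_key(|r| r.start);
    ranges.windows(2).all(|w| w[0].end <= w[1].start)
}
```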
Memory Overflow from Malicious Range Input
State
Severity
- Severity: Low
Submitted by
Blockdev
Description
A vector whose length is the weight of the share's range is created during `recover`. If the input weight range is maliciously chosen to be large, memory can be exhausted.
Recommendation
Perform some sanity bound-checking on range lengths.
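A sketch of the sanity check before allocating; `MAX_TOTAL_WEIGHT` is an illustrative constant, not taken from the code.

```rust
use std::ops::Range;

const MAX_TOTAL_WEIGHT: usize = 1 << 20; // illustrative upper bound

/// Validate the range before allocating a buffer sized by it.
fn alloc_for_range(range: &Range<usize>) -> Result<Vec<u8>, &'static str> {
    let len = range.end.checked_sub(range.start).ok_or("range end < start")?;
    if len > MAX_TOTAL_WEIGHT {
        return Err("range length exceeds maximum total weight");
    }
    Ok(vec![0u8; len])
}
```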
Test Case Does Not Properly Unwrap Result
State
Severity
- Severity: Low
Submitted by
Blockdev
Description
The assertion in the test case checks if the result `is_ok()`. However, a failed verification will still return an Ok result.
Recommendation
Check whether the value inside the outer Ok result is itself Ok to determine whether verification succeeded or failed.
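A sketch of the corrected assertion, assuming the verifier returns a nested result as described; the exact error types are placeholders.

```rust
/// Sketch: verification is assumed to return Result<Result<(), E>, E2>,
/// where only the inner result carries the verification outcome.
fn assert_verified<E, E2: std::fmt::Debug>(result: Result<Result<(), E>, E2>) {
    // unwrap the outer result (no internal error), then require the
    // inner result to be Ok (signature actually verified)
    assert!(result.unwrap().is_ok());
}
```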
Out-of-bounds Check for Fixed Length Hash Input Size
Severity
- Severity: Low
Description
The implementation of `DigestAlgorithm` for `FixedLenPoseidon2Hash` assumes `INPUT_SIZE >= 2` and indexes into an array without checking bounds.
Recommendation
Include a bounds check.
Out-of-bounds Check for Merkle Tree Proof Arity
Severity
- Severity: Low
Description
The implementation of Merkle tree verification assumes proof vectors include a witness of the appropriate arity and indexes into the array without checking bounds.
Recommendation
Include a bounds check.
Out-of-bounds Check on Shares In Namespaced Recover
State
- Fixed
PR #3128
Severity
- Severity: Low ≈ Likelihood: High × Impact: Low
Submitted by
Nirvan Tyagi
Description
The `shares` vector is indexed without checking if it is empty.
Recommendation
Include a bounds check.
Out-of-bounds Check on Additional Shares in Namespaced Recover
State
- Fixed
PR #3128
Severity
- Severity: Low ≈ Likelihood: High × Impact: Low
Submitted by
Nirvan Tyagi
Description
The `shares` vector is indexed without checking if it is of the appropriate length.
Recommendation
Include a bounds check.
Informational: 2 findings
Signing Optimization to Avoid Extra Group Scalar Multiplication
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
Blockdev
Description
During message signing, the public verification key is generated from the signing key. This is done because a subcall to `KeyPair::sign` is made, whose API requires a `KeyPair` containing both a signing key and a verification key. The actual signing logic only requires the signing key; generation of the verification key is unneeded.
Recommendation
There are two API changes that can be made to avoid this extra cost (see the sketch below):
- (Option 1) Adapt `KeyPair::sign` to instead operate as `SignKey::sign`
- (Option 2) Change `SigningKey` to be `KeyPair`
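A hedged sketch of Option 1; all types below are illustrative stand-ins for the scheme's real ones.

```rust
struct Scalar;      // placeholder secret scalar
struct GroupElem;   // placeholder group element
struct SignKey(Scalar);

fn hash_to_group(_msg: &[u8]) -> GroupElem { unimplemented!() }
fn scalar_mul(_p: GroupElem, _s: &Scalar) -> GroupElem { unimplemented!() }

impl SignKey {
    /// Sign with the secret scalar alone: sigma = sk * H(msg). No
    /// verification key is materialized, saving a group scalar
    /// multiplication per signature.
    fn sign(&self, msg: &[u8]) -> GroupElem {
        scalar_mul(hash_to_group(msg), &self.0)
    }
}
```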
Batch Merkle Tree Lookups for Efficient Lookups Over Contiguous Ranges
State
- Acknowledged
Severity
- Severity: Informational
Submitted by
Blockdev
Description
Merkle lookup proofs for a consecutive range of leaves can be made more efficient than simply providing a separate Merkle path for each leaf in the range. Intuitively, the first set of nodes in the Merkle paths for consecutive leaves are redundant and can be omitted as they can be computed from the leaves themselves.
Recommendation
Switch to batch lookup proofs for contiguous ranges.
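A minimal sketch of the deduplication idea: key each auxiliary node by (level, index) so nodes shared across the per-leaf paths are shipped once. The full optimization additionally omits nodes recomputable from the range's leaves themselves; the proof layout here is an assumption, not the library's format.

```rust
use std::collections::BTreeMap;

/// One auxiliary node of a single-leaf Merkle path.
type PathNode = (u32 /* level */, u64 /* index */, [u8; 32] /* digest */);

/// Merge per-leaf paths for a contiguous range into a deduplicated node
/// set; shared prefix nodes are stored exactly once.
fn merge_paths(paths: &[Vec<PathNode>]) -> BTreeMap<(u32, u64), [u8; 32]> {
    let mut nodes = BTreeMap::new();
    for path in paths {
        for &(level, index, digest) in path {
            nodes.insert((level, index), digest);
        }
    }
    nodes
}
```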