Aztec Labs

Aztec stdlib field

Cantina Security Report

Organization: @Aztec-Labs

Engagement Type: Cantina Reviews

Period: -

Findings

Low Risk (3 findings)

  1. No restriction on num_bits in ranged_less_than

    State: New

    Severity: Low (Likelihood: Low × Impact: High)

    Submitted by: ed255


    Description

    Using ranged_less_than with a num_bits value greater than or equal to the number of bits of the native field makes the constraints unsound: the range check will accept everything.

    I'm not sure whether creating the range constraint with such a big value for num_bits will compile and/or run without a crash or exception.

    Recommendation

    Add a static assert checking that num_bits is less than the number of bits of the native field minus one (K = 2^num_bits should always be less than half of the native field's modulus).
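
    A minimal sketch of such a guard, assuming num_bits is available at compile time (modulus_bits is a hypothetical constant equal to the bit length of the native field modulus, 254 for bb::fr; if num_bits is only known at runtime, an ASSERT would take its place):

    // Require K = 2^num_bits < modulus / 2, i.e. num_bits < modulus_bits - 1,
    // so that the comparison constraints of ranged_less_than stay sound.
    constexpr size_t modulus_bits = 254; // hypothetical constant for bb::fr
    static_assert(num_bits < modulus_bits - 1,
                  "ranged_less_than: num_bits too large for the native field");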

  2. Missing builder context validation in field operations

    State: New

    Severity: Low (Likelihood: Low × Impact: Low)

    Submitted by: JakubHeba


    Description

    In arithmetic operations, the code selects a working context using patterns like ctx = (context == nullptr) ? other.context : context in operator+/operator*, ctx = (context) ? context : other.context in divide_no_zero_check, and ctx = first_non_null(context, other.context, third.context) in methods like madd and add_two.

    When operands have different non-null contexts, these operations proceed using one Builder's context while reading witness indices that belong to a different Builder. Since witness indices are positions in a specific Builder's variables array, using indices from builder2 in builder1's context creates gates with wrong variable references.

    For example, if builder1 and builder2 both have a witness at index 0 with values 10 and 20 respectively, an operation between them would incorrectly use value 10 twice instead of 10 and 20.
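
    A hypothetical sketch of the mismatch (builder setup and values are illustrative only):

    Builder builder1;
    Builder builder2;
    // Each witness lands at index 0 of its own builder's variables array.
    field_t<Builder> a = witness_t<Builder>(&builder1, 10);
    field_t<Builder> b = witness_t<Builder>(&builder2, 20);
    // The gate is created in builder1's context, so b's index 0 resolves to
    // builder1's variable 0 (value 10) rather than builder2's (value 20).
    field_t<Builder> c = a + b; // evaluates to 10 + 10 = 20 instead of 30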

    Recommendation

    Add validation to detect context mismatches when both operands are non-constant.

    if (!is_constant() && !other.is_constant()) {
        ASSERT(context == other.context);
    }
  3. set_public on a constant field_t passes IS_CONSTANT as a wire index

    State: New

    Severity: Low (Likelihood: Low × Impact: Low)

    Submitted by: JakubHeba


    Description

    The set_public method does:

    return context->set_public_input(normalize().witness_index);

    but for a pure constant where witness_index == IS_CONSTANT, it passes IS_CONSTANT to set_public_input, which isn’t a valid wire index.

    In practice, that will either crash or silently misuse a public input slot.

    Recommendation

    We recommend adding an ASSERT(!is_constant(), "error") to mitigate the issue.
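
    A minimal sketch of the guarded method (return type and signature are assumed from the existing call):

    uint32_t set_public() const
    {
        // A constant has witness_index == IS_CONSTANT, which is not a valid
        // wire index, so reject it before touching the builder.
        ASSERT(!is_constant(), "set_public called on a constant field_t");
        return context->set_public_input(normalize().witness_index);
    }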

Informational (14 findings)

  1. [doc] Assumption in ranged_less_than is overly restrictive; method comment has some mistakes

    State: New

    Severity: Informational (Likelihood: Medium × Impact: Medium)

    Submitted by: ed255


    Description

    The documentation of the method ranged_less_than states that a and b are assumed to be < 2^{num_bits} - 1, but this assumption is impractical and overly restrictive.

    It is practical to require an input to fit in a certain number of bits, which can be achieved with a range check. The assumption that a value fits in num_bits bits is a <= 2^{num_bits} - 1. Notice the less-than-or-equal comparison used here vs. the strict less-than found in the documentation.

    The constraints of this method also work when both a and b are <= 2^{num_bits} - 1.

    The internal method comment is also slightly incorrect with respect to the implementation:

    • the implementation uses K = 2^{num_bits} but the comment uses K = 2^{num_bits} - 1
    • if q == 1, the comment's claim 0 < b - a - 1 does not always hold: given a = 0, b = 1 we have 0 < 1 - 0 - 1 <=> 0 < 0, which is false.

    Recommendation

    Change the method documentation to say

    This method *assumes* that both a and b are <= 2^{num_bits} - 1

    or

    This method *assumes* that both a and b are < 2^{num_bits}

    Change the method comment to something like this:

    // Let q = (a < b)
    // Assume both a and b are < K where K = 2^{num_bits}
    //    q == 1  <=>  0 < b - a     < K   =>  0 <= b - a - 1     < K
    //    q == 0  <=>  0 < b - a + K <= K  =>  0 <= b - a + K - 1 < K
    // i.e. for any bool value of q:
    //    (b - a - 1) * q + (b - a + K - 1) * (1 - q) = r < K
    //    q * (b - a - b + a) + b - a + K - 1 - (K - 1) * q - q = r < K
    //    b - a + (K - 1) - K * q = r < K
  2. Use get_value to get a boolean value

    State: New

    Severity: Informational (Likelihood: Low × Impact: Low)

    Submitted by: ed255


    Description

    When creating a field_t from a bool_t in the constant case, using the bool_t::get_value method makes the code clearer than manually computing the boolean value with other.witness_bool ^ other.witness_inverted.

    If the internal representation of bool_t changes in the future, this code would still work.

    Recommendation

    additive_constant = other.get_value() ? bb::fr::one() : bb::fr::zero();
  3. bool_t has a constructor that takes context and value

    State: New

    Severity: Informational (Likelihood: Low × Impact: Low)

    Submitted by: ed255


    Description

    This code can be simplified by using the existing bool_t constructor that takes a context and a value.

    Recommendation

    bool_t<Builder> result(context, additive_constant == bb::fr::one());
  4. Inconsistent naming of plonk gate terms

    State: New

    Severity: Informational (Likelihood: Low × Impact: Low)

    Submitted by: ed255


    Description

    The plonk gate terms are written in different ways throughout the codebase. I've encountered the following three styles:

    q_m * w_l * w_r + q_l * w_l + q_r * w_r + q_o * w_o + q_4 * w_4 + q_c
    q_m * a * b + q_1 * a + q_2 * b + q_3 * c + q_c
    mul_scaling * a * b + a_scaling * a + b_scaling * b + c_scaling * c + d_scaling * d + const_scaling

    Recommendation

    Use the same style/terms throughout the codebase where possible. Consistency greatly helps the reader of the code understand the logic with less overhead.

  5. [opt] madd could be optimized for the case where 2 inputs are constant

    State: New

    Severity: Informational (Likelihood: Low × Impact: Low)

    Submitted by: ed255


    Description

    The current implementation of madd optimizes the case where all three inputs are constant; otherwise it generates one constraint. If two inputs are constant (and the third one contains a witness), the constraint can be skipped: the result is an affine function of the single witness, which field_t can represent without a gate.

    The same optimization could be applied to add_two.

    Recommendation

    If madd has a chance of being called with two constant inputs, applying the described optimization would reduce the total number of gates in a circuit. This could be evaluated by taking different existing circuits and counting how many madd calls are made with two constant inputs. A sketch of the optimization follows.
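
    A minimal sketch of the idea, assuming madd computes this * to_mul + to_add (member and method names follow the report; not a verified patch):

    field_t madd(const field_t& to_mul, const field_t& to_add) const
    {
        // With at most one witness among the three operands, the result is an
        // affine function of that witness: operator* and operator+ fold the
        // constants into multiplicative_constant / additive_constant without
        // creating a gate.
        const int num_constant =
            int(is_constant()) + int(to_mul.is_constant()) + int(to_add.is_constant());
        if (num_constant >= 2) {
            return (*this * to_mul) + to_add;
        }
        // ... otherwise fall through to the existing single-gate construction ...
    }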

  6. Unnecessary witness multiplied by 0 in normalize

    State: New

    Severity: Informational (Likelihood: Low × Impact: Low)

    Submitted by: ed255


    Description

    The normalize method creates a gate that enforces this.mul * this.v + 0 * this.v + (-1) * result.v + this.add = 0. The assignment b = this.v stands out because its multiplicative coefficient is 0.

    Recommendation

    Assigning context->zero_idx to the b term would make the code clearer: the reader wouldn't have to figure out why witness_index is used twice in the gate, only to realize that one instance is multiplied by 0.

    For example, the assert_is_zero method follows this pattern of using context->zero_idx and is more readable.
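
    An illustrative sketch of the suggested gate, written with the a/b/c + scaling terms used elsewhere in the codebase (the exact builder API and struct layout are assumptions):

    // Enforce: mul * v + 0 * 0 + (-1) * result.v + add = 0
    context->create_add_gate({ witness_index,            // a
                               context->zero_idx,        // b: explicit zero wire
                               result.witness_index,     // c
                               multiplicative_constant,  // a_scaling
                               bb::fr::zero(),           // b_scaling
                               -bb::fr::one(),           // c_scaling
                               additive_constant });     // const_scaling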

  7. [opt] assert_equal optimization

    State: New

    Severity: Informational (Likelihood: Low × Impact: Low)

    Submitted by: ed255


    Description

    When lhs and rhs in assert_equal are non-constant, both field_ts are normalized. If neither was already normalized, two gates are used.

    In that case, a single gate that constrains the two field_ts to be equal could be used instead of a copy constraint.

    Recommendation

    When lhs and rhs are not normalized and not constant, instead of normalizing and copy constraining, apply the following gate:

    lhs - rhs = 0
    lhs.v * lhs.mul + lhs.add - (rhs.v * rhs.mul + rhs.add) = 0
    lhs.mul * lhs.v + (-rhs.mul) * rhs.v + (lhs.add - rhs.add) = 0
    ---
    a   = lhs.v
    b   = rhs.v
    q_m = 0
    q_l = lhs.mul
    q_r = -rhs.mul
    q_c = lhs.add - rhs.add

    This reduces the number of used gates from 2 to 1 in this case.
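
    A sketch of the corresponding builder call, using the selector assignment derived above (the poly-gate API and its field order are assumptions):

    context->create_poly_gate({ lhs.witness_index,                                // a
                                rhs.witness_index,                                // b
                                context->zero_idx,                                // c (unused)
                                bb::fr::zero(),                                   // q_m
                                lhs.multiplicative_constant,                      // q_l
                                -rhs.multiplicative_constant,                     // q_r
                                bb::fr::zero(),                                   // q_o
                                lhs.additive_constant - rhs.additive_constant }); // q_c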

  8. [doc] confusing comment

    State: New

    Severity: Informational (Likelihood: Low × Impact: Low)

    Submitted by: ed255


    Description

    In accumulate, the calculation of the witness result carries a comment about conditionally adding the constant values when the input contains non-constant field_t elements. But the code has already handled the case of zero non-constant elements with an early return:

    if (accumulator.empty()) {
        return constant_term;
    }

    So at this point in the function, we know that the input contains non-constant field_t elements, and we unconditionally calculate the witness of the addition over all inputs.

    Recommendation

    Update the comment to reflect the current code.

  9. [doc] incorrect documentation of slice method

    State: New

    Severity: Informational (Likelihood: Low × Impact: Low)

    Submitted by: ed255


    Description

    The slice method splits a field element into three parts, named lo, slice, and hi in the code, corresponding to the least significant part, the middle part, and the most significant part of the original element. The code comments treat bit 0 as the least significant and bit 255 as the most significant, and describe the following bit ranges for each part: lo: [0, lsb - 1], slice: [lsb, msb], and hi: [msb + 1, 255].

    Nevertheless, the method documentation describes the bit ranges as [0, msb-1], [msb, lsb], and [lsb+1, 256], which is inconsistent with the code comments. It is also inconsistent with the order of the returned elements.

    Recommendation

    Update the method documentation to describe the bit ranges as [0, lsb - 1], [lsb, msb], and [msb + 1, 255].

  10. [opt] decompose_into_bits with num_bits = 253 does unnecessary aliasing check

    State: New

    Severity: Informational (Likelihood: Low × Impact: Low)

    Submitted by: ed255


    Description

    The decompose_into_bits method performs an aliasing check to make sure the bit decomposition has a unique solution (the canonical representation). The extra constraints for that check are only supposed to be generated when aliasing can occur.

    Nevertheless, when num_bits = 253 the aliasing-check constraints are generated even though aliasing cannot occur:

    The code calculates num_bits_modulus_minus_one = 253. This is correct because log2 of the modulus is approximately 253.6, so it takes 254 bits to represent an fr element.

    If num_bits = 253 the representation must be unique: an aliased representation would have to encode x + p for some x, and since p > 2^253, that value needs at least 254 bits.

    Recommendation

    Only apply the aliasing-check constraints when num_bits > num_bits_modulus_minus_one (note the change from a non-strict to a strict comparison).
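
    A minimal sketch of the change, with variable names taken from the finding (surrounding code elided):

    // Aliasing requires encoding x + p for some x; since p > 2^253, that needs
    // at least 254 bits, so a 253-bit decomposition is already unique.
    if (num_bits > num_bits_modulus_minus_one) { // previously a non-strict comparison
        // ... generate the aliasing-check constraints ...
    }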

  11. [opt] field_t multiplication by zero: result not canonicalized to a constant

    State: New

    Severity: Informational (Likelihood: Low × Impact: Low)

    Submitted by: JakubHeba


    Description

    Whenever you multiply a non-constant field by the constant zero, the code produces a field_t with

    result.additive_constant = additive_constant * other.additive_constant;
    result.multiplicative_constant = multiplicative_constant * other.additive_constant;
    result.witness_index = witness_index;

    so is_constant remains false even though the represented value is exactly zero. Subsequent branches like in assert_is_zero or operator+ will treat it as a variable, creating unnecessary gates.

    Recommendation

    We recommend that any operation yielding multiplicative_constant == 0 collapse the result into a true constant.
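
    A minimal sketch of the collapse, assuming a constant keeps its value in additive_constant (member names follow the report):

    if (result.multiplicative_constant == bb::fr::zero()) {
        // The witness no longer contributes: value = 0 * w + additive_constant.
        result.witness_index = IS_CONSTANT;
        result.multiplicative_constant = bb::fr::one(); // assumed canonical constant form
    }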

  12. [opt] Duplicate constraint in invert

    State: New

    Severity: Informational (Likelihood: Low × Impact: Low)

    Submitted by: ed255


    Description

    The invert method calls division with the dividend set to the constant 1. This creates two constraints: the first proves that the divisor is not zero, and the second proves the result of the division. But when the dividend is the constant 1, those two constraints are the same.

    Consider

    • input = w_1*3 + 5

    The assert_is_not_zero generates the following constraint

    • (w_1*3 + 5)*w_2 = 1 <=> 3*w_1*w_2 + 5*w_2 - 1 = 0, where w_2 is the inverse of the input

    Then divide_no_zero_check generates the following constraint for the result witness w_3

    • (w_1*3 + 5)*w_3 = 1 <=> 3*w_3*w_1 + 5*w_3 - 1 = 0

    The solution to the division is the inverse of the input, which we already obtained via the constraint in assert_is_not_zero.

    Recommendation

    The operator/ check that the divisor is not 0 is only required if the dividend can be 0 (0 = q * 0 holds for any q). If we know the dividend is not 0 (the case where it is a non-zero constant), the division constraint itself already rules out a zero divisor, so the explicit check can be skipped.
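
    A minimal sketch of the idea, with method names taken from the report (not a verified patch):

    field_t operator/(const field_t& other) const
    {
        // If the dividend is a known non-zero constant, the division constraint
        // dividend = result * divisor is already unsatisfiable for a zero
        // divisor, so the separate non-zero check would duplicate it.
        if (!is_constant() || get_value() == 0) {
            other.assert_is_not_zero();
        }
        return divide_no_zero_check(other);
    }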

  13. unexpected random value in test_slice_random

    State: New

    Severity: Informational (Likelihood: Low × Impact: Low)

    Submitted by: ed255


    Description

    The test test_slice_random is expected to exercise the slice method with a random 252-bit input. To produce this input, a random value is generated and is expected to be masked, but the mask operator is a logical and (&&) instead of a bitwise and (&), leading to a boolean result that is 1 with high probability.

    Recommendation

    Replace the logical and with a bitwise and:

    fr a_ = fr(engine.get_random_uint256() & ((uint256_t(1) << 252) - 1));

    Notice the & vs &&.

  14. test_slice_random not covering the entire input space

    State: New

    Severity: Informational (Likelihood: Low × Impact: Low)

    Submitted by: ed255


    Description

    The test test_slice_random prepares an input that is 252 bits, but the slice method works with inputs of up to 253 bits. The slice method doesn't accept values that need more than 253 bits, to avoid aliasing (multiple valid solutions that are congruent modulo p).

    The slice method constrains:

    • lo < 2^lsb
    • mid < 2^(msb+1 - lsb)
    • hi < 2^(252 - msb)

    Summing the bit widths in the linear combination gives:

    • lsb + msb + 1 - lsb + 252 - msb = 253

    Perhaps the range constraint on the number of bits of hi in slice would be clearer written like this:

    hi_wit.create_range_constraint(253 - msb_plus_one, "slice: hi value too large.");

    Recommendation

    Mask the input to be 253 bits instead of 252 bits:

    fr a_ = fr(engine.get_random_uint256() & ((uint256_t(1) << 253) - 1));

    Calculate the high part correctly:

    const uint256_t expected2 = (uint256_t(a_) >> (msb + 1)) & ((uint256_t(1) << (253 - msb - 1)) - 1);