Organization
- @Aztec-Labs
Engagement Type
Cantina Reviews
Period
-
Repositories
Findings
Low Risk: 3 findings
No restriction on num_bits in ranged_less_than
State
- New
Severity
- Severity: Low
- Likelihood: Low
- Impact: High

Submitted by
ed255
Description
Using `ranged_less_than` with a value of `num_bits` greater than or equal to the number of bits of the native field makes the constraints unsound: the range check will just accept everything. I'm not sure if the creation of the range constraint with such a big value for `num_bits` will compile and/or run without any crash/exception.

Recommendation
Add a static assert to check that `num_bits` is lower than the number of bits of the native field minus one (`K = 2^num_bits` should always be less than half of the modulus of the native field).

Missing builder context validation in field operations
State
- New
Severity
- Severity: Low
- Likelihood: Low
- Impact: Low

Submitted by
JakubHeba
Description
In arithmetic operations, the code selects a working context using patterns like `ctx = (context == nullptr) ? other.context : context` in `operator+`/`operator*`, `ctx = (context) ? context : other.context` in `divide_no_zero_check`, and `ctx = first_non_null(context, other.context, third.context)` in methods like `madd` and `add_two`.

When operands have different non-null contexts, these operations proceed using one Builder's context while reading witness indices that belong to a different Builder. Since witness indices are positions in a specific Builder's variables array, using indices from `builder2` in `builder1`'s context creates gates with wrong variable references.

For example, if `builder1` and `builder2` both have a witness at index 0, with values 10 and 20 respectively, an operation between them would incorrectly use the value 10 twice instead of 10 and 20.

Recommendation
Add validation to detect context mismatches when both operands are non-constant.
```cpp
if (!is_constant() && !other.is_constant()) {
    ASSERT(context == other.context);
}
```

The set_public on constant field_t passes IS_CONSTANT as a wire index
State
- New
Severity
- Severity: Low
- Likelihood: Low
- Impact: Low

Submitted by
JakubHeba
Description
The `set_public` method does:

```cpp
return context->set_public_input(normalize().witness_index);
```

but for a pure constant, where `witness_index == IS_CONSTANT`, it passes `IS_CONSTANT` to `set_public_input`, which isn't a valid wire index. In practice, that will either crash or silently misuse a public input slot.
Recommendation
We recommend adding an `ASSERT(!is_constant(), "error")` to mitigate the issue.
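The suggested guard can be sketched with a hypothetical, simplified `field_t` model. The names `IS_CONSTANT` and `witness_index` follow the report; everything else below is illustrative and not barretenberg's actual API:

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Hypothetical simplified model (not barretenberg's real field_t):
// IS_CONSTANT is a sentinel value, not a valid wire index.
constexpr uint32_t IS_CONSTANT = std::numeric_limits<uint32_t>::max();

struct field_t_model {
    uint32_t witness_index = IS_CONSTANT;

    bool is_constant() const { return witness_index == IS_CONSTANT; }

    // Suggested mitigation: refuse to forward the sentinel to set_public_input.
    uint32_t set_public() const
    {
        assert(!is_constant() && "set_public called on a constant field_t");
        return witness_index; // would be passed to context->set_public_input(...)
    }
};
```

With such a guard, exposing a constant as a public input fails loudly at the call site instead of feeding a sentinel into the public-input bookkeeping.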
Informational: 14 findings
[doc] Assumption over ranged_less_than is overly restrictive, method comment has some mistakes
State
- New
Severity
- Severity: Informational
- Likelihood: Medium
- Impact: Medium

Submitted by
ed255
Description
The documentation of the method `ranged_less_than` mentions that `a` and `b` are assumed to be `< 2^{num_bits} - 1`, but this assumption is impractical and overly restrictive. It's practical to require an input to be of a certain number of bits, which can be achieved with a range check. The assumption that a number fits in `num_bits` bits would be `a <= 2^{num_bits} - 1`. Notice the lower-or-equal comparison used here vs. the strictly-lower comparison found in the documentation. The constraints of this method also work when both `a` and `b` are `<= 2^{num_bits} - 1`.

The internal method comment is also slightly incorrect with respect to the implementation:
- the implementation uses `K = 2^{num_bits}` but the comment uses `K = 2^{num_bits} - 1`
- if `q == 1` then `0 < b - a - 1` is not true. Given `a = 0, b = 1` we have `0 < 1 - 0 - 1 <=> 0 < 0`, which is false.
Recommendation
Change the method documentation to say

`This method *assumes* that both a and b are <= 2^{num_bits} - 1`

or

`This method *assumes* that both a and b are < 2^{num_bits}`

Change the method comment to something like this:

```cpp
// Let q = (a < b)
// Assume both a and b are < K where K = 2^{num_bits}
// q == 1 <=> 0 < b - a < K      => 0 <= b - a - 1 < K
// q == 0 <=> 0 < b - a + K <= K => 0 <= b - a + K - 1 < K
// i.e. for any bool value of q:
// (b - a - 1) * q + (b - a + K - 1) * (1 - q) = r < K
// q * (b - a - b + a) + b - a + K - 1 - (K - 1) * q - q = r < K
// b - a + (K - 1) - K * q = r < K
```

Use get_value to get a boolean value
State
- New
Severity
- Severity: Informational
- Likelihood: Low
- Impact: Low

Submitted by
ed255
Description
When creating a `field_t` from a `bool_t` in the constant case, using the `bool_t::get_value` method makes the code clearer than manually computing the boolean value with `other.witness_bool ^ other.witness_inverted`. If there is a future change in the `bool_t` internal implementation, this would still work.

Recommendation

```cpp
additive_constant = other.get_value() ? bb::fr::one() : bb::fr::zero();
```

bool_t has a constructor that takes context and value
State
- New
Severity
- Severity: Informational
- Likelihood: Low
- Impact: Low

Submitted by
ed255
Description
This code can be simplified with an already existing constructor.

Recommendation

```cpp
bool_t<Builder> result(context, additive_constant == bb::fr::one());
```

Inconsistent naming of plonk gate terms
State
- New
Severity
- Severity: Informational
- Likelihood: Low
- Impact: Low

Submitted by
ed255
Description
The plonk gate terms are written in different ways across the code base. I've encountered the following three styles:

- `q_m * w_l * w_r + q_l * w_l + q_r * w_r + q_o * w_o + q_4 * w_4 + q_c`
- `q_m * a * b + q_1 * a + q_2 * b + q_3 * c + q_c`
- `mul_scaling * a * b + a_scaling * a + b_scaling * b + c_scaling * c + d_scaling * d + const_scaling`

Recommendation
Use the same style/terms around the code-base where possible. Being consistent greatly helps the reader of the code to understand the logic with less overhead.
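For reference, the three spellings all name the same wide arithmetic gate. A toy evaluation over a small prime field (a purely illustrative helper, not barretenberg code) makes the term-by-term correspondence explicit:

```cpp
#include <cassert>
#include <cstdint>

// Toy prime field, small enough that uint64_t arithmetic cannot overflow.
constexpr uint64_t P = 101;

// One wide arithmetic gate, written with one naming convention:
//   q_m * w_l * w_r + q_l * w_l + q_r * w_r + q_o * w_o + q_4 * w_4 + q_c = 0 (mod P)
// The other two spellings (q_1/a, mul_scaling/a_scaling, ...) name exactly
// the same selector and wire terms.
uint64_t eval_gate(uint64_t q_m, uint64_t q_l, uint64_t q_r, uint64_t q_o,
                   uint64_t q_4, uint64_t q_c,
                   uint64_t w_l, uint64_t w_r, uint64_t w_o, uint64_t w_4)
{
    uint64_t acc = q_m * w_l % P * w_r % P;
    acc = (acc + q_l * w_l + q_r * w_r + q_o * w_o + q_4 * w_4 + q_c) % P;
    return acc;
}
```

A multiplication gate is then `q_m = 1, q_o = -1`, an addition gate `q_l = q_r = 1, q_o = -1`, regardless of which naming style is used.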
[opt] madd could be optimized for the case where 2 inputs are constant
State
- New
Severity
- Severity: Informational
- Likelihood: Low
- Impact: Low

Submitted by
ed255
Description
The current implementation of `madd` optimizes the case where all 3 inputs are constants; otherwise it generates one constraint. If two inputs are constant (and the third one contains a witness), the constraint can be skipped. The same optimization could be applied to `add_two`.

Recommendation
If there are chances of using `madd` with two constant inputs, applying the described optimization would reduce the total number of gates in a circuit. This could be evaluated by taking different existing circuits and counting how many instances of `madd` are called with two constant inputs.

Unnecessary witness multiplied by 0 in normalize
State
- New
Severity
- Severity: Informational
- Likelihood: Low
- Impact: Low

Submitted by
ed255
Description
The `normalize` method creates a gate that does `this.mul * this.v + 0 * this.v + result.v * (-1) + this.add = 0`. The assignment of the term `b = this.v` stands out because the multiplicative term is 0.

Recommendation
Assigning `context->zero_idx` to the `b` term would make the code clearer, as the reader doesn't have to figure out why the `witness_index` is used twice in the gate, only to realize that in one instance it is being multiplied by 0. For example, the method `assert_is_zero` follows this pattern of using `context->zero_idx` and is more readable.

[opt] assert_equal optimization
State
- New
Severity
- Severity: Informational
- Likelihood: Low
- Impact: Low

Submitted by
ed255
Description
When `lhs` and `rhs` in `assert_equal` are non-constant, both `field_t`s are normalized. If both `field_t`s weren't already normalized, two gates will be used. For such a case, instead of doing a copy constraint, a single gate could be used that constrains the two `field_t`s to be equal.

Recommendation
When `lhs` and `rhs` are not normalized and not constant, instead of normalizing and copy constraining, apply the following gate:

```
lhs - rhs = 0
lhs.v * lhs.mul + lhs.add - (rhs.v * rhs.mul + rhs.add) = 0
lhs.mul * lhs.v + (-rhs.mul) * rhs.v + (lhs.add - rhs.add) = 0
---
a = lhs.v
b = rhs.v
q_m = 0
q_l = lhs.mul
q_r = -rhs.mul
q_c = lhs.add - rhs.add
```

This reduces the number of used gates from 2 to 1 in this case.
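The derivation above can be checked with a toy sketch over a small prime field (hypothetical types, not the audited code): the single gate with these coefficients is satisfied exactly when the two lazy values agree.

```cpp
#include <cassert>
#include <cstdint>

// Toy prime field mirroring field_t's lazy representation value = v * mul + add,
// where v is the witness and (mul, add) are circuit constants.
constexpr uint64_t P = 101;

struct lazy_fe {
    uint64_t v;   // witness value
    uint64_t mul; // multiplicative_constant
    uint64_t add; // additive_constant
    uint64_t value() const { return (v * mul + add) % P; }
};

// The proposed single equality gate q_l * a + q_r * b + q_c = 0 (mod P) with
//   a = lhs.v, b = rhs.v, q_l = lhs.mul, q_r = -rhs.mul, q_c = lhs.add - rhs.add.
bool equality_gate_satisfied(const lazy_fe& lhs, const lazy_fe& rhs)
{
    uint64_t q_l = lhs.mul % P;
    uint64_t q_r = (P - rhs.mul % P) % P;               // -rhs.mul mod P
    uint64_t q_c = (lhs.add % P + P - rhs.add % P) % P; // lhs.add - rhs.add mod P
    return (q_l * lhs.v + q_r * rhs.v + q_c) % P == 0;
}
```

The gate expression equals `lhs.value() - rhs.value()` modulo P by construction, so satisfying it is equivalent to equality of the two represented values.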
[doc] confusing comment
State
- New
Severity
- Severity: Informational
- Likelihood: Low
- Impact: Low

Submitted by
ed255
Description
In `accumulate`, the calculation of the witness result has a comment about a conditional addition of the constant values when `input` contains non-constant `field_t` elements. But the code has already checked for the case where there are 0 non-constant `field_t` elements in the `input`, with an early return:

```cpp
if (accumulator.empty()) {
    return constant_term;
}
```

So at this point of the function, we know that the input contains non-constant `field_t` elements, and we unconditionally calculate the witness of the addition over all inputs.

Recommendation
Update the comment to reflect the current code.
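The control flow described above can be sketched with a hypothetical, simplified `accumulate` (illustrative only, not the real implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy prime field for the sketch.
constexpr uint64_t P = 101;

struct elem {
    uint64_t value;
    bool is_constant;
};

uint64_t accumulate(const std::vector<elem>& input)
{
    uint64_t constant_term = 0;
    std::vector<uint64_t> accumulator; // values of the non-constant elements
    for (const elem& e : input) {
        if (e.is_constant) {
            constant_term = (constant_term + e.value) % P;
        } else {
            accumulator.push_back(e.value);
        }
    }
    if (accumulator.empty()) {
        return constant_term; // 0 non-constant elements: handled by the early return
    }
    // Past the early return, at least one non-constant element exists, so the
    // constant term is always folded into the witness sum -- unconditionally.
    uint64_t result = constant_term;
    for (uint64_t v : accumulator) {
        result = (result + v) % P;
    }
    return result;
}
```

After the early return, there is no remaining "conditional addition" of the constant term, which is what the stale comment suggests.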
[doc] incorrect documentation of slice method
State
- New
Severity
- Severity: Informational
- Likelihood: Low
- Impact: Low

Submitted by
ed255
Description
The slice method slices a field element into 3 parts, named `lo`, `slice`, `hi` in the code, corresponding to the least significant part, the middle part, and the most significant part of the original element. The code has comments that treat bit `0` as the least significant and bit `255` as the most significant, and describes the following bit ranges for each part: `lo: [0, lsb - 1]`, `slice: [lsb, msb]` and `hi: [msb + 1, 255]`.

Nevertheless, the method documentation describes the bit ranges as `[0, msb-1]`, `[msb, lsb]`, and `[lsb+1, 256]`, which is inconsistent with the code comments. It is also inconsistent with the order of the returned elements.

Recommendation
Update the method documentation to describe the bit ranges as `[0, lsb - 1]`, `[lsb, msb]` and `[msb + 1, 255]`.

[opt] decompose_into_bits with num_bits = 253 does unnecessary aliasing check
State
- New
Severity
- Severity: Informational
- Likelihood: Low
- Impact: Low

Submitted by
ed255
Description
The `decompose_into_bits` method performs an aliasing check to make sure the bit decomposition has a unique solution (the canonical representation). The extra constraints for that are only supposed to be generated when aliasing can occur. Nevertheless, when `num_bits = 253` the aliasing check constraints are generated even though there can't be aliasing:

- The code calculates `num_bits_modulus_minus_one = 253`. This is correct because the log2 of the modulus is 253.6, so it takes 254 bits to represent an `fr` element.
- If `num_bits = 253` the representation must be unique, because an overflow would require an extra bit (with 254 bits we can represent 2^254 values, which is bigger than 2^253.6).

Recommendation
Only apply the aliasing check constraints when `num_bits > num_bits_modulus_minus_one` (notice the change from lower-or-equal to strictly lower).

[opt] The field_t multiplication by zero result not canonicalized to constant
State
- New
Severity
- Severity: Informational
- Likelihood: Low
- Impact: Low

Submitted by
JakubHeba
Description
Whenever you multiply a non-constant field by the constant zero, the code produces a `field_t` with

```cpp
result.additive_constant = additive_constant * other.additive_constant;
result.multiplicative_constant = multiplicative_constant * other.additive_constant;
result.witness_index = witness_index;
```

so `is_constant` remains `false` even though the represented value is exactly zero. Subsequent branches, like in `assert_is_zero` or `operator+`, will treat it as a variable, creating unnecessary gates.

Recommendation
We recommend that after any operation that yields `multiplicative_constant == 0`, the result should be turned into a true constant.

[opt] Duplicate constraint in invert
State
- New
Severity
- Severity: Informational
- Likelihood: Low
- Impact: Low

Submitted by
ed255
Description
The `invert` method calls division with the dividend set to the constant 1. This creates 2 constraints: the first proves that the divisor is not zero, and the second one proves the result of the division. But when the dividend is the constant 1, those two constraints are the same.

Consider:

- input = `w_1*3 + 5`

`assert_is_not_zero` generates the following constraint, where `w_2` is the inverse of the input:

`(w_1*3 + 5)*w_2 = 1 <=> 3*w_1*w_2 + 5*w_2 - 1 = 0`

Then `divide_no_zero_check` generates the following constraint for the result witness `w_3`:

`(w_1*3 + 5)*w_3 = 1 <=> 3*w_3*w_1 + 5*w_3 - 1 = 0`

The solution to the division is the inverse of the input, which we already obtained via the constraint in `assert_is_not_zero`.

Recommendation
The `operator/` check that the divisor is not `0` is only required if the dividend can be `0`. If we know the dividend is not `0` (the case where it is a non-zero constant), we can skip that constraint.

unexpected random value in test_slice_random
State
- New
Severity
- Severity: Informational
- Likelihood: Low
- Impact: Low

Submitted by
ed255
Description
The test `test_slice_random` is expected to test the slice method with a random input that has 252 bits. To get this number, a random value is generated and is then expected to be masked, but the mask operator is a logical and (`&&`) instead of a bitwise and (`&`), leading to a boolean result that is 1 with high probability.

Recommendation
Replace the logical and with a bitwise and:
```cpp
fr a_ = fr(engine.get_random_uint256() & ((uint256_t(1) << 252) - 1));
```

Notice the `&` vs `&&`.

test_slice_random not covering the entire input space
State
- New
Severity
- Severity: Informational
- Likelihood: Low
- Impact: Low

Submitted by
ed255
Description
The `test_slice_random` test prepares an input that is 252 bits, but the `slice` method works with inputs of up to 253 bits. The slice method doesn't work with values that need more than 253 bits, to avoid aliasing (multiple valid solutions that are congruent modulo p).

The `slice` method constrains:

- `lo < 2^lsb`
- `mid < 2^(msb+1 - lsb)`
- `hi < 2^(252 - msb)`

Then the linear combination leads to the following number of bits:

`lsb + (msb + 1 - lsb) + (252 - msb) = 253`

Perhaps the constraint on the number of bits for `hi` in `slice` would be clearer like this:

```cpp
hi_wit.create_range_constraint(253 - msb_plus_one, "slice: hi value too large.");
```

Proof of Concept
Recommendation
Mask the input to be 253 bits instead of 252 bits:
```cpp
fr a_ = fr(engine.get_random_uint256() & ((uint256_t(1) << 253) - 1));
```

Calculate the high part correctly:

```cpp
const uint256_t expected2 = (uint256_t(a_) >> (msb + 1)) & ((uint256_t(1) << (253 - msb - 1)) - 1);
```
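The slice bit ranges and the shift-and-mask computation of the high part can be illustrated with a toy 64-bit version (hypothetical helper, not the audited 253-bit code):

```cpp
#include <cassert>
#include <cstdint>

// Toy illustration of the slice bit ranges using 64-bit integers instead of
// 253-bit field elements:
//   lo  = bits [0, lsb - 1], mid = bits [lsb, msb], hi = bits [msb + 1, ...]
struct slice_parts {
    uint64_t lo;
    uint64_t mid;
    uint64_t hi;
};

slice_parts slice(uint64_t x, unsigned lsb, unsigned msb)
{
    uint64_t lo = x & ((uint64_t(1) << lsb) - 1);
    uint64_t mid = (x >> lsb) & ((uint64_t(1) << (msb + 1 - lsb)) - 1);
    uint64_t hi = x >> (msb + 1); // mirrors expected2 = (a_ >> (msb + 1)) & mask
    return { lo, mid, hi };
}
```

Recomposing `lo + (mid << lsb) + (hi << (msb + 1))` yields the original value, which is the property the test should exercise over the whole 253-bit input space.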