
recursion: add R1CS dump/shrink tooling #1

Open

sidhujag wants to merge 143 commits into main from dump-r1cs

Conversation

@sidhujag
Member

Summary

Adds utilities and plumbing to dump, shrink, and verify recursion/R1CS constraints for SP1 programs, plus supporting updates across the recursion toolchain and gnark-ffi.

Changes

  • Add dump_shrink_verify_constraints prover binary and related helpers.
  • Extend recursion compiler/R1CS pipeline (BabyBear + Poseidon2 + LF backends) to support exporting constraint systems (see the sketch after this list).
  • Update gnark-ffi assets + Go bindings for updated verifier/proving flow.
  • Add/adjust field + Poseidon2 constants plumbing (incl. BLS12-377-related field module).
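
To make the "exporting constraint systems" bullet concrete, here is a minimal sketch of the R1CS shape being dumped, assuming a plain sparse representation over BabyBear; the in-repo types are richer and the names here are illustrative:

    // BabyBear prime: p = 15 * 2^27 + 1.
    const P: u64 = 2013265921;

    /// One sparse linear combination: sum of coeff * z[idx] over the witness z.
    struct SparseRow {
        terms: Vec<(usize, u64)>, // (witness index, coefficient in [0, p))
    }

    /// One R1CS constraint: <a, z> * <b, z> = <c, z>  (mod p).
    struct Constraint {
        a: SparseRow,
        b: SparseRow,
        c: SparseRow,
    }

    fn eval(row: &SparseRow, z: &[u64]) -> u64 {
        row.terms
            .iter()
            .fold(0u64, |acc, &(idx, coeff)| (acc + coeff * z[idx] % P) % P)
    }

    /// A dumped system is satisfied iff every constraint row checks out.
    fn is_satisfied(cs: &[Constraint], z: &[u64]) -> bool {
        cs.iter().all(|k| {
            (eval(&k.a, z) as u128 * eval(&k.b, z) as u128) % P as u128
                == eval(&k.c, z) as u128
        })
    }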

Test plan

  • cargo test -p prover -p recursion-compiler -p recursion-circuit -p verifier
  • Run cargo run -p prover --bin dump_shrink_verify_constraints -- --help and a representative program config.
  • (Optional) run gnark-ffi Go tests under crates/recursion/gnark-ffi/go/.

Example full invocation:

    SP1_ALLOW_DEPRECATED_HOOKS=true SHRINK_PROOF_CACHE=/tmp/shrink_proof.bin OUT_R1CS_LF=/tmp/shrink_verifier.r1lf OUT_WITNESS=/tmp/shrink_verifier.witness.u64le RUSTFLAGS="-C target-cpu=native" RAYON_NUM_THREADS=96 cargo run -p sp1-prover --bin dump_shrink_verify_constraints --release

jagdeep sidhu added 30 commits January 4, 2026 23:03
Add a new R1CS compiler that converts DslIr operations to R1CS constraints,
enabling lattice-based folding schemes (Symphony) to verify SP1 proofs.

- R1CS compiler with handlers for all shrink verifier opcodes
- Self-verifying binary format with embedded SHA256 digest (sketched after this message)
- Poseidon2 permutation MDS/round constants for BabyBear
- dump_shrink_verify_constraints binary to export shrink verifier R1CS
- Comprehensive unit tests for FriFold, ExpReverseBits, BatchFRI

Usage: OUT_R1CS=shrink.r1cs cargo run -p sp1-prover \
       --bin dump_shrink_verify_constraints --release
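
A minimal sketch of the self-verification idea, assuming the SHA256 digest is simply appended over the serialized constraint payload (the actual header/layout of the dump format may differ; the sha2 crate is assumed):

    use sha2::{Digest, Sha256};

    /// Append a SHA256 digest of the payload so readers can detect
    /// truncation or corruption without any side channel.
    fn write_self_verifying(payload: &[u8]) -> Vec<u8> {
        let mut out = Vec::with_capacity(payload.len() + 32);
        out.extend_from_slice(payload);
        out.extend_from_slice(&Sha256::digest(payload));
        out
    }

    /// Recompute the digest and compare; returns the payload only on a match.
    fn read_self_verifying(bytes: &[u8]) -> Option<&[u8]> {
        let split = bytes.len().checked_sub(32)?;
        let (payload, digest) = bytes.split_at(split);
        (Sha256::digest(payload).as_slice() == digest).then_some(payload)
    }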
@sidhujag
Member Author

@codex review


@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 06b9bb4dbb


Comment on lines 2673 to 2677
    for felt in public_values.sp1_vk_digest.iter() {
        let idx = self.get_var(&felt.id(), ctx.as_deref_mut());
        debug_assert!(
            idx >= 1 && idx <= self.r1cs.num_public,
            "CircuitV2CommitPublicValues(sp1_vk_digest) must refer to a public-input slot (idx={}, num_public={})",
            idx,
            self.r1cs.num_public,
        );
    }

P2: Align public-input checks with digest-only export

Phase 0 explicitly exports only RecursionPublicValues.digest as public inputs (see phase0_collect_public_ids around lines 260–272), but CircuitV2CommitPublicValues asserts that sp1_vk_digest (and later committed_value_digest) must occupy the public-input prefix. In BabyBear recursion circuits those fields are not preallocated, so debug builds will panic when compiling the shrink verifier (e.g., dump_shrink_verify_constraints in debug), and release builds will silently accept a public-input layout that contradicts these checks. Consider validating the digest IDs here instead, or expand Phase 0 to include the other fields if they are meant to be public.
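
A standalone sketch of the suggested alternative, validating against the IDs Phase 0 actually exported rather than asserting a prefix layout; the VarId type and the phase-0 set are hypothetical stand-ins for the PR's real types:

    use std::collections::HashSet;

    type VarId = u32; // stand-in for the compiler's variable-ID type

    /// Assert (in debug builds) that every digest variable was exported as a
    /// public input by phase 0, without assuming any particular slot layout.
    fn check_digest_is_public(digest_ids: &[VarId], phase0_public: &HashSet<VarId>) {
        for id in digest_ids {
            debug_assert!(
                phase0_public.contains(id),
                "sp1_vk_digest var {id} was not exported as a public input by phase 0",
            );
        }
    }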



jagdeep sidhu added 3 commits January 20, 2026 19:25
@sidhujag
Member Author

@codex review


@chatgpt-codex-connector (bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 52a1d4c754


Comment on lines +535 to +542
    #[inline]
    fn eval_row_i128<F: PrimeField64>(row: &SparseRowI64, w: &[F]) -> i128 {
        row.terms
            .iter()
            .map(|(idx, coeff)| {
                let v = bb_coeff_to_centered_i64(w[*idx].as_canonical_u64()) as i128;
                (*coeff as i128) * v
            })
            .sum()
    }


P1: Compute lift witness with canonical values

The auxiliary q/carry values are derived via eval_row_i128, which converts witness values to centered representatives (bb_coeff_to_centered_i64). However, the exported LF witness (w_out) is in canonical [0, p) form, and the lifted constraints will be evaluated against those canonical values. For any witness element greater than p/2 (e.g. p-1), the centered computation yields a different a*b - c and therefore a different quotient, so the generated witness will not satisfy the lifted R1LF constraint in LF. This can surface as unsatisfied constraints during verification of perfectly valid shrink proofs. Consider computing a_int/b_int/c_int from canonical values (or exporting centered witness values consistently).
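
A worked instance of the mismatch, assuming BabyBear p = 2013265921 and the usual centered lift of [0, p) into (-p/2, p/2]; function names are illustrative:

    const P: i64 = 2013265921; // BabyBear prime

    /// Centered representative of a canonical value in [0, p).
    fn centered(x: i64) -> i64 {
        if x > P / 2 { x - P } else { x }
    }

    fn main() {
        // a * b = c (mod p) holds, since (p-1)^2 = 1 (mod p).
        let (a, b, c) = (P - 1, P - 1, 1i64);
        // Quotient derived from centered values: (-1)*(-1) - 1 = 0, so q = 0.
        let q_centered = (centered(a) as i128 * centered(b) as i128 - c as i128) / P as i128;
        // Quotient the canonical witness actually needs: ((p-1)^2 - 1)/p = p - 2.
        let q_canonical = (a as i128 * b as i128 - c as i128) / P as i128;
        assert_ne!(q_centered, q_canonical); // 0 vs 2013265919
    }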

