
Bivariate Polynomials #103

Open
dhsorens wants to merge 8 commits into master from dhsorens/bivariate

Conversation

@dhsorens
Collaborator

As described on the Roadmap, for Phase 1 we want to have a basic, computable implementation for Bivariate polynomials. This pull request adds a Bivariate directory and a new type CBivariate. It defines bivariate polynomials as nested computable Univariate polynomials: CPolynomial (CPolynomial R). It includes the basic type definition, basic definitions for operations, and a basic structure for the equivalence with Mathlib's R[X][Y].

The rough outline of continued work on this, mimicking what's on Univariate:

  • Define computable operations and API on CBivariate
  • Prove ring equivalence with Mathlib's R[X][Y]
  • Prove operations and API correct wrt the Mathlib spec, over the ring equivalence
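For concreteness, the nested definition described above presumably looks roughly like the following sketch (instance arguments are assumed here and may differ from the actual file):

```lean
namespace CompPoly

/-- Computable bivariate polynomials as nested computable univariate
    polynomials: the inner variable plays the role of `X`, the outer
    the role of `Y`. (Sketch only; instance arguments assumed.) -/
def CBivariate (R : Type*) [BEq R] [Ring R] : Type _ :=
  CPolynomial (CPolynomial R)

end CompPoly
```

Note this relies on the `BEq` instance for `CPolynomial R` added elsewhere in this PR.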

@dhsorens
Collaborator Author

dhsorens commented Feb 18, 2026

@katyhr, would love your feedback and comments on this!

Main questions:

  1. are we happy with CPolynomial (CPolynomial R) as the formalization of the Bivariate type?
  2. I chose the namespace CompPoly.CBivariate to mimic the pattern of CompPoly.CPolynomial and CompPoly.CMvPolynomial - but should we instead use CompPoly.CPolynomial.Bivariate, to match Arklib and to reflect that it's a special case of CPolynomial?
  3. the type signature of the toPoly isomorphism is currently into R[X][Y] - would you say this is right?

@github-actions

github-actions bot commented Feb 18, 2026

🤖 Gemini PR Summary

This Pull Request introduces the initial implementation of computable bivariate polynomials (CBivariate) as part of Phase 1 of the project roadmap. The implementation follows a nested approach, defining bivariate polynomials as CPolynomial (CPolynomial R).

Beyond the core feature, this PR includes a significant toolchain adjustment (downgrading to Lean v4.26.0) and extensive proof refactoring across the binary field and univariate modules to improve maintainability and tactic stability.

Features

  • Bivariate Polynomial Type: Introduced CBivariate, implemented as nested computable univariate polynomials.
  • Core API: Added basic operations for CBivariate, including evaluation, degree calculation, and coefficient access.
  • Mathlib Integration: Established the initial framework for ring equivalence between CBivariate and Mathlib’s R[X][Y].
  • Univariate Instances: Implemented BEq, LawfulBEq, DecidableEq, and Nontrivial instances for the existing CPolynomial type to enhance its utility in computable contexts.

Fixes

  • Library Compatibility: Updated lemma names and instances (e.g., Fin, Finsupp, AddMonoidWithOneNat) to align with current library conventions and toolchain requirements.
  • Univariate Logic: Completed the proof for sub-array extraction size in coeff_eq_getElem_of_lt within the Raw univariate implementation.
  • Dependency Management: Downgraded the Lean toolchain and Mathlib dependencies to v4.26.0 to ensure environment stability.
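For reference, pinning the toolchain to that version would look like this in the `lean-toolchain` file (standard format, assumed to match the PR's actual change):

```
leanprover/lean4:v4.26.0
```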

Refactoring

  • Proof Optimization: Streamlined complex proofs in the BF128Ghash and Tower field modules by replacing heavy automation and recursion depth overrides with explicit tactics and simplified logic.
  • Tactic Migration: Replaced uses of the grind tactic with configured aesop calls in multivariate equivalence proofs to improve proof robustness.
  • Common Logic: Refined bit-vector to polynomial conversions by replacing auxiliary lemmas with direct summation, reducing code footprint.
  • Univariate Clean-up: Refactored zipIdx_trim_append using Trim.elim for more idiomatic case analysis.

Documentation

  • Bivariate Module: Added a new README.md within the CompPoly/Bivariate directory to document the design decisions and compatibility goals for the nested polynomial implementation.
  • Style Maintenance: Updated style-exceptions.txt to reflect reduced line counts in refactored files.

Analysis of Changes

| Metric | Count |
| --- | --- |
| 📝 Files Changed | 23 |
| Lines Added | 619 |
| Lines Removed | 578 |

sorry Tracking

❌ **Added:** 7 `sorry`(s)
  • theorem toPoly_ofPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] [DecidableEq R] in CompPoly/Bivariate/ToPoly.lean
  • theorem coeff_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem degreeX_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem join_eq_iff_dcast_extractLsb {k : ℕ} (h_pos : k > 0) (x : ConcreteBTField k) in CompPoly/Fields/Binary/Tower/Impl.lean
  • theorem natDegreeY_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean
  • theorem ofPoly_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] [DecidableEq R] in CompPoly/Bivariate/ToPoly.lean
  • theorem evalEval_toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R] in CompPoly/Bivariate/ToPoly.lean

🎨 **Style Guide Adherence**

The following code changes violate the CompPoly Style and Naming Guidelines:

  • CompPoly/Bivariate/Basic.lean:30: "Indentation: Use 2 spaces for indentation." (Indented 4 spaces).
  • CompPoly/Bivariate/Basic.lean:69: "Acronyms: Treat as words (e.g., HtmlParser not HTMLParser)." (Uses CC instead of Cc).
  • CompPoly/Bivariate/Basic.lean:72: "Functions and Terms: lowerCamelCase (e.g., binarySearch, isPrime)." (Uses X instead of x).
  • CompPoly/Bivariate/Basic.lean:75: "Functions and Terms: lowerCamelCase (e.g., binarySearch, isPrime)." (Uses Y instead of y).
  • CompPoly/Fields/Binary/BF128Ghash/Basic.lean:67: "Empty Lines: Avoid empty lines inside definitions or proofs."
  • CompPoly/Fields/Binary/BF128Ghash/Basic.lean:71: "Empty Lines: Avoid empty lines inside definitions or proofs."
  • CompPoly/Fields/Binary/BF128Ghash/Basic.lean:75: "Empty Lines: Avoid empty lines inside definitions or proofs."
  • CompPoly/Fields/Binary/BF128Ghash/Basic.lean:79: "Empty Lines: Avoid empty lines inside definitions or proofs."
  • CompPoly/Fields/Binary/BF128Ghash/Basic.lean:83: "Empty Lines: Avoid empty lines inside definitions or proofs."
  • CompPoly/Fields/Binary/BF128Ghash/Basic.lean:117: "Empty Lines: Avoid empty lines inside definitions or proofs."
  • CompPoly/Fields/Binary/Tower/Basic.lean:1441: "Comments: Use -- for inline comments and /- ... -/ for block comments." (Uses triple dashes ---).
  • CompPoly/Fields/Binary/Tower/Impl.lean:73: "Line Length: Keep lines under 100 characters." (Line is 108 characters).
  • CompPoly/Fields/Binary/Tower/Impl.lean:3266: "Comments: Use -- for inline comments and /- ... -/ for block comments." (Uses triple dashes ---).
  • CompPoly/Fields/Binary/Tower/Impl.lean:3280: "Comments: Use -- for inline comments and /- ... -/ for block comments." (Uses triple dashes ---).

📄 **Per-File Summaries**
  • CompPoly.lean: This update adds imports for bivariate polynomial definitions and conversions to the main project file.
  • CompPoly/Bivariate/Basic.lean: Defines the CBivariate type as a nested structure of computable univariate polynomials and implements core operations such as evaluation, degree calculations, and coefficient access.
  • CompPoly/Bivariate/README.md: Introduces documentation for the CBivariate type, which represents computable bivariate polynomials as nested univariate polynomials compatible with Mathlib.
  • CompPoly/Bivariate/ToPoly.lean: Establishes the conversion and ring equivalence between the custom CBivariate type and Mathlib's bivariate polynomials (R[X][Y]).
  • CompPoly/Data/List/Lemmas.lean: Update the proof of getLastI_append_single by refining the simplification lemmas to use more direct library references.
  • CompPoly/Fields/Binary/AdditiveNTT/AdditiveNTT.lean: Update Fin and Finsupp lemma names to maintain compatibility with updated naming conventions in the core library.
  • CompPoly/Fields/Binary/BF128Ghash/Basic.lean: Refactor and simplify the proof of Rabin's irreducibility test for degree 128 by streamlining tactics and removing an unnecessary recursion depth limit.
  • CompPoly/Fields/Binary/BF128Ghash/Impl.lean: Optimize proofs by replacing heavy automation with explicit tactics and improving the handling of type coercions.
  • CompPoly/Fields/Binary/BF128Ghash/Prelude.lean: Removes the maxRecDepth configuration option as it is no longer required for the file's proofs.
  • CompPoly/Fields/Binary/BF128Ghash/XPowTwoPowGcdCertificate.lean: Refines the proof of the toPoly_gcd_b_0_val_mod_ghashPoly_eq_gcd_b_0_val lemma by streamlining tactic applications and adding the Fin.coe_castAdd simplification rule.
  • CompPoly/Fields/Binary/Common.lean: Streamline the proofs for bit-vector to polynomial conversion by replacing several auxiliary lemmas with direct summation and refined tactic usage.
  • CompPoly/Fields/Binary/Tower/Basic.lean: Refactors the proofs for binary tower field bases, specifically multilinearBasis_apply, to improve the handling of type casts and remove redundant lemmas.
  • CompPoly/Fields/Binary/Tower/Impl.lean: Refactor and simplify the proofs for the binary field tower implementation, specifically streamlining the commutativity results between concrete and abstract tower algebra maps.
  • CompPoly/Fields/PrattCertificate.lean: Updates the pratt tactic to use the correctly named instAddMonoidWithOneNat instance when deriving natural numbers.
  • CompPoly/Multivariate/MvPolyEquiv.lean: Refactor the proof of degreeOf_equiv by replacing the grind tactic with configured calls to aesop.
  • CompPoly/ToMathlib/Finsupp/Fin.lean: Update the simp proofs for snoc_zero_zero and insertNth_zero_zero to use equivFunOnFinite_symm_apply_toFun instead of coe_equivFunOnFinite_symm.
  • CompPoly/Univariate/Basic.lean: Implement BEq, LawfulBEq, DecidableEq, and Nontrivial instances for the CPolynomial type.
  • CompPoly/Univariate/Quotient.lean: Refactors the proof of zipIdx_trim_append to use Trim.elim and a more explicit case analysis based on whether the trimmed polynomial is empty.
  • CompPoly/Univariate/Raw.lean: Completed the proof for the size of an extracted sub-array in the coeff_eq_getElem_of_lt theorem by justifying the extraction range.
  • lake-manifest.json: Update various project dependencies, including mathlib and Cli, by downgrading them to older revisions and versions.
  • lakefile.lean: Downgrade the mathlib and ExtTreeMapLemmas dependencies to version v4.26.0.
  • lean-toolchain: Downgrade the Lean 4 toolchain version from v4.28.0 to v4.26.0.
  • scripts/style-exceptions.txt: Update the line count exception for CompPoly/Fields/Binary/Tower/Impl.lean to reflect its reduced size.

Last updated: 2026-02-20 16:32 UTC.

@dhsorens
Collaborator Author

Currently kind of waffling on the namespace definition - we should either have bivariate polynomials defined within CPolynomial as CPolynomial.Bivariate R := CPolynomial (CPolynomial R) or in its own top-level namespace CBivariate - can't decide!
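For concreteness, the two candidate spellings might look like this (a sketch only; instance arguments omitted):

```lean
-- Option A: nested inside the existing namespace
namespace CompPoly.CPolynomial
def Bivariate (R : Type*) := CPolynomial (CPolynomial R)
end CompPoly.CPolynomial
-- referred to as CPolynomial.Bivariate R

-- Option B: its own top-level name, mirroring CMvPolynomial
namespace CompPoly
def CBivariate (R : Type*) := CPolynomial (CPolynomial R)
end CompPoly
-- referred to as CBivariate R
```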

@alexanderlhicks
Collaborator

I kinda lean in favour of CBivariate for conciseness

@alexanderlhicks
Collaborator

/review


@alexanderlhicks self-assigned this Feb 20, 2026
@dhsorens
Collaborator Author

dhsorens commented Feb 20, 2026

weird... something went wrong with the history of this branch; it should only affect 5 files. investigating

@dhsorens
Collaborator Author

ok, fixed - now properly ready for review.

@github-actions

🤖 AI Review (with external context)


Overall Summary:
While the refactoring and maintenance updates to the existing library are verified as correct, the new Bivariate polynomial implementation is rejected due to critical logic and type errors. Specifically, the swap function incorrectly performs an identity transformation instead of transposing variables, and there is a pervasive type mismatch treating the CBivariate alias as a structure via invalid .val accessors. These issues in CompPoly/Bivariate/* must be resolved before merging.


📄 **Review for `CompPoly.lean`**

Hello. I have reviewed the changes in CompPoly.lean.

1. Analysis of Implementation

The changes in this PR involve adding two new import statements to the CompPoly.lean file:

  • import CompPoly.Bivariate.Basic
  • import CompPoly.Bivariate.ToPoly

Logic and Consistency:
This file appears to serve as a library root or "barrel file," aggregating sub-modules for users of the library. The existing imports include CompPoly.Univariate.* modules. The addition of CompPoly.Bivariate.* modules is structurally consistent with the existing pattern and indicates the exposure of bivariate polynomial formalizations to the library's public API.

Specification Compliance:
While no specific external document was provided regarding the addition of bivariate polynomials, the naming convention mirrors the existing Univariate modules.

2. Check for Misformalization

  • Off-by-One Errors: N/A (Import statements).
  • Recursive Definitions: N/A.
  • Incorrect Assumptions: N/A.
  • Prop vs Type: N/A.
  • Universe Levels: N/A.
  • Circular Dependencies: Since the code compiles (as stated in the prompt), there are no circular dependency cycles introduced by these imports.

3. Verdict & Feedback

Verdict: The formalization is correct.

The changes are low-risk and structural, properly integrating the new Bivariate modules into the top-level library entry point. No misformalization issues are present.

📄 **Review for `CompPoly/Bivariate/Basic.lean`**

This review focuses on CompPoly/Bivariate/Basic.lean.

The file introduces a computable bivariate polynomial type CBivariate R defined as CPolynomial (CPolynomial R). While the structural definition is sound, there are critical issues in the implementation of the swap operation and pervasive type errors regarding the accessor .val.

1. Critical Misformalization in swap

The swap function, intended to swap the roles of variables $X$ and $Y$ (mapping $P(X, Y) \mapsto P(Y, X)$), is implemented incorrectly. The current implementation reconstructs the original polynomial rather than transposing the variables.

  • Current Logic:
    It iterates over the support of $f$. For each term $c_{i,j} X^i Y^j$ (where $j$ comes from the outer polynomial index and $i$ from the inner), it constructs:
    `let coeffJ = ... monomial i c`, yielding $c X^i$, and then `monomial j coeffJ`, yielding $(c X^i) Y^j$.
    This results in the identity transformation.

  • Correct Logic:
    To swap variables, the coefficient $c_{i,j}$ associated with $X^i Y^j$ in the original polynomial must be associated with $X^j Y^i$ in the result (conceptually mapping $X \to Y$ and $Y \to X$, where the inner variable is always "X" and outer always "Y" in the data structure). The resulting term should be constructed via monomialXY j i c (since monomialXY n m creates $X^n Y^m$).

  • Correction:

    def swap [DecidableEq R] (f : CBivariate R) : CBivariate R :=
      (CPolynomial.support f).sum (fun j =>
        let inner := CPolynomial.coeff f j
        (CPolynomial.support inner).sum (fun i =>
          monomialXY j i (CPolynomial.coeff inner i)))

2. Pervasive Type Error (.val)

Issue:
The code consistently uses an accessor .val on variables of type CBivariate R (e.g., f.val.coeff, f.val.natDegree, f.val.eval).
However, CBivariate is defined as a definition alias, not a structure:

def CBivariate ... := CPolynomial (CPolynomial R)

Unless CPolynomial is a structure possessing a field named val (which is non-standard for polynomial types in Lean/Mathlib and inconsistent with the direct usage of f in CPolynomial.support f in supportX), f itself is the polynomial. Accessing f.val is likely a type error.

Recommendation:
Remove .val accessors throughout the file. Functions like coeff, natDegree, and eval should be called directly on f (or qualified as CPolynomial.coeff f ... if namespace resolution requires it).

3. Other Findings

  • leadingCoeffX: This definition relies on swap. It is mathematically correct as (f.swap).leadingCoeffY only if swap is fixed. With the current broken swap, leadingCoeffX returns leadingCoeffY.
  • evalX: The logic monomial j (eval a (coeff j)) correctly computes $\sum P_j(a) Y^j$, which represents the polynomial in $Y$ obtained by setting $X=a$. This is correct.
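A hedged reconstruction of the `evalX` definition as described, using the same names (`CPolynomial.support`, `monomial`, `eval`, `coeff`) that appear in the review's own snippets — the actual file may differ:

```lean
/-- Evaluate the inner variable X at `a`, leaving a polynomial in Y:
    `P(X, Y) ↦ P(a, Y) = ∑ⱼ Pⱼ(a) Yʲ`. (Sketch; names and instance
    arguments assumed.) -/
def evalX [DecidableEq R] (a : R) (f : CBivariate R) : CPolynomial R :=
  (CPolynomial.support f).sum (fun j =>
    CPolynomial.monomial j (CPolynomial.eval a (CPolynomial.coeff f j)))
```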

Verdict & Corrected Code

Verdict: Incorrect (Critical logic error in swap, Major type errors).

Corrected Code Snippets:

Below are the corrected definitions for swap and coeff (as an example of removing .val).

/-- Swap the roles of X and Y. -/
def swap [DecidableEq R] (f : CBivariate R) : CBivariate R :=
  (CPolynomial.support f).sum (fun j =>
    let inner := CPolynomial.coeff f j
    (CPolynomial.support inner).sum (fun i =>
      monomialXY j i (CPolynomial.coeff inner i)))

/-- Coefficient of `X^i Y^j` in the bivariate polynomial. -/
def coeff (f : CBivariate R) (i j : ℕ) : R :=
  CPolynomial.coeff (CPolynomial.coeff f j) i

Note: Ensure CPolynomial.coeff, CPolynomial.support, etc., are accessible. If CBivariate is opened or CPolynomial namespace is used, qualification may be adjusted, but the .val indirection must be removed.

📄 **Review for `CompPoly/Bivariate/ToPoly.lean`**

This review covers the new file CompPoly/Bivariate/ToPoly.lean.

Verdict: Incorrect

The formalization logic regarding the conversion between the custom CBivariate type and Mathlib's R[X][Y] appears algorithmically correct, but there are inconsistent usages of the CBivariate type that will likely cause type errors. The implementation treats CBivariate as a wrapper structure in some places and as the underlying polynomial type in others.

Issues Found

1. Critical Type Mismatch in toPoly

  • Location: toPoly, line 40.
  • Code: (CPolynomial.support p).sum ... vs p.val.coeff j.
  • Analysis:
    • The usage p.val.coeff j implies p (of type CBivariate R) is a structure or Subtype wrapping an underlying polynomial (accessible via .val).
    • However, CPolynomial.support p passes p directly to a function that likely expects a CPolynomial. If p is a wrapper, CPolynomial.support should typically be called on p.val.
    • While a coercion might exist, mixing direct access (p.val) and coerced access (support p) is fragile and suggests a misunderstanding of the type definition.
  • Correction: Assuming CBivariate is a wrapper, usage should be consistent: CPolynomial.support p.val.

2. Return Type Mismatch in ofPoly

  • Location: ofPoly, lines 50-52.
  • Code: The function body is a Finset.sum of monomials.
  • Analysis:
    • CPolynomial.monomial typically constructs a CPolynomial.
    • The sum of CPolynomials is a CPolynomial (via AddCommMonoid).
    • The function signature defines the return type as CBivariate R.
    • If CBivariate R is a structure/subtype (as implied by p.val in toPoly), the raw sum of polynomials cannot be returned directly. It must be wrapped in the CBivariate constructor (e.g., ⟨ ... ⟩), possibly requiring a proof if CBivariate enforces invariants (like trimming).
  • Correction: Wrap the result in the appropriate constructor for CBivariate.

3. Ambiguity in ofPoly Constraints

  • Location: ofPoly signature.
  • Observation: The constraint [DecidableEq R] is correctly added because cj.toImpl (converting a sparse Mathlib polynomial to a dense implementation) usually requires checking coefficients for equality with zero (trimming). This is a correct addition, not an issue.

Recommended Fixes

Assuming CBivariate R is a structure wrapping CPolynomial (Subtype ...):

-- In toPoly
noncomputable def toPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R]
    (p : CBivariate R) : R[X][Y] :=
  -- FIX: Access p.val for support
  (CPolynomial.support p.val).sum (fun j =>
    monomial j (CPolynomial.toPoly (p.val.coeff j)))

-- In ofPoly
def ofPoly {R : Type*} [BEq R] [LawfulBEq R] [Nontrivial R] [Ring R]
    [DecidableEq R] (p : R[X][Y]) : CBivariate R :=
  -- FIX: Wrap the sum in the CBivariate constructor (assuming simple structure or subtype)
  ⟨(p.support).sum (fun j =>
    let cj := p.coeff j
    CPolynomial.monomial j ⟨cj.toImpl, CPolynomial.Raw.trim_toImpl cj⟩)⟩

If CBivariate R is instead defined as abbrev CBivariate R := CPolynomial ..., then p.val in toPoly is incorrect and should be removed. Given the explicit usage of p.val, the structure/subtype interpretation is most likely intended.

📄 **Review for `CompPoly/Data/List/Lemmas.lean`**

This is a review of the changes in CompPoly/Data/List/Lemmas.lean.

1. Implementation Analysis

The diff modifies the proof of getLastI_append_single. The theorem states that for a list l and an element x, the "inhabited last" element of l ++ [x] is x.

  • Theorem Statement: (l ++ [x]).getLastI = x
    • Context: [Inhabited α] is present, which is required for getLastI to return a default value if the list were empty (though l ++ [x] is never empty).
  • Old Proof: Relied on List.getLastI_eq_getLast? (likely relating getLastI to Option.get! or similar) and Option.some_or.
  • New Proof:
    • Replaces List.getLastI_eq_getLast? with List.getLastI_eq_getLast?_getD. This suggests getLastI is formally treated as l.getLast?.getD default.
    • Adds Option.getD_some.
    • Retains List.getLast?_append, List.getLast?_singleton.

Logical Flow:

  1. List.getLastI_eq_getLast?_getD transforms the LHS to (l ++ [x]).getLast?.getD default.
  2. List.getLast?_append and List.getLast?_singleton reduce (l ++ [x]).getLast? to some x.
  3. The expression becomes (some x).getD default.
  4. Option.getD_some simplifies (some x).getD default to x.
  5. x = x closes the goal.

The change aligns with standard library updates in Lean 4 (Mathlib4), where definitions involving "Inhabited" accessors often standardize on Option.getD.
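Under the lemma names listed above, the updated proof would plausibly be a single simp call along these lines (a sketch, not verified against the actual file):

```lean
theorem getLastI_append_single {α : Type*} [Inhabited α] (l : List α) (x : α) :
    (l ++ [x]).getLastI = x := by
  -- reduce getLastI to getLast?.getD, compute getLast? of the append,
  -- then collapse (some x).getD default to x
  simp only [List.getLastI_eq_getLast?_getD, List.getLast?_append,
    List.getLast?_singleton, Option.getD_some]
```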

2. Misformalization Checklist

  • Off-by-One Errors: None. The logic correctly identifies the last element of a list appended with a singleton.
  • Recursive Definitions: N/A.
  • Incorrect Assumptions/Side-Conditions: The [Inhabited α] instance is correctly present. While l ++ [x] is provably non-empty and logically doesn't require a default value to extract the last element, the definition of getLastI structurally requires the Inhabited instance. The theorem is sound.
  • Prop vs. Type: Correct usage.
  • Universe Levels: No universe manipulation involved.

3. Verdict & Feedback

Verdict: Correct.

The changes represent a correct update to the proof script, likely adjusting for changes in underlying library definitions (specifically relating getLastI to Option.getD). The logic is sound and the formalization is robust.

Feedback:

  • The update correctly handles the reduction of getLastI via getD.
  • No issues found.
📄 **Review for `CompPoly/Fields/Binary/AdditiveNTT/AdditiveNTT.lean`**

You have asked me to review the changes in CompPoly/Fields/Binary/AdditiveNTT/AdditiveNTT.lean from a pull request. The changes primarily involve updating lemma names to match recent library conventions (refactoring Fin.coe_* to Fin.val_*) and completing a proof by filling a sorry.

1. Analysis of Changes

  • Lemma Renaming (Maintenance):

    • Throughout the file, Fin.coe_cast is replaced with Fin.val_cast, Fin.coe_castSucc with Fin.val_castSucc, etc.
    • Similarly, Finsupp.equivFunOnFinite_symm_apply_toFun is updated to Finsupp.coe_equivFunOnFinite_symm.
    • Verdict: These are syntactic updates to align with Mathlib naming conventions (preferring val over coe when referring to the struct field .val, and aligning Finsupp coercion lemmas). Since the code compiles, these changes are correct and semantic-preserving.
  • Proof Script Cleanup:

    • In getSDomainBasisCoeff_of_iteratedQuotientMap, rfl is added, and some simp only calls are consolidated or replaced with rw.
    • Verdict: These are harmless simplifications of the proof tactics.
  • Filling sorry in foldl_NTTStage_inductive_aux:

    • Context: The base case (| zero) of an induction on k : Fin (ℓ + 1).
    • Change:
      simp only [Fin.coe_ofNat_eq_mod, Nat.zero_mod, Fin.foldl_zero, tsub_zero]
      exact invariant_init
    • Logic: Fin.foldl with 0 iterations acts as the identity on the accumulator. The goal requires showing that the initial state satisfies the invariant, which is given by invariant_init. The simplifiers Fin.foldl_zero (reducing the fold) and tsub_zero (handling index arithmetic) correctly reduce the goal to the hypothesis invariant_init.
    • Verdict: Correct.
  • Removal of Commented Code:

    • A commented out -- sorry was removed in NTTStage_correctness.
    • Verdict: Housekeeping.
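The base-case fact doing the work here — `Fin.foldl` over zero iterations is the identity on the accumulator — can be stated in isolation (lemma name taken from the simp set above):

```lean
-- Folding over Fin 0 performs no iterations, returning the accumulator
example {α : Type*} (f : α → Fin 0 → α) (x : α) :
    Fin.foldl 0 f x = x :=
  Fin.foldl_zero f x
```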

2. Check for Misformalization

  • Off-by-One Errors: The changes involving Fin (e.g., Fin.val_castSucc, Fin.val_last) are name changes for existing lemmas. No indices or bounds were logically altered in the diff.
  • Recursive Definitions: The induction base case for foldl is standard and correctly handled.
  • Incorrect Assumptions: The proof relies on invariant_init being available in the context, which is consistent with the induction setup.

3. Verdict

The changes in this file are Correct. They represent necessary maintenance to keep the code compiling with updated library versions and the completion of a proof obligation.

Summary:

  • Formalization Status: Correct.
  • Issues Found: None.
📄 **Review for `CompPoly/Fields/Binary/BF128Ghash/Basic.lean`**

The review of the changes in CompPoly/Fields/Binary/BF128Ghash/Basic.lean indicates that the modifications are primarily technical adjustments to the proof script for type unification and elaboration robustness, most likely needed to manually align Fintype.card (ZMod 2) with the numeral 2. The mathematical logic remains consistent with Rabin's Irreducibility Test.

Verdict: Correct

Analysis:

  1. Robustness Fixes (ZMod.card vs 2): The main changes involve explicit rewrites of ZMod.card. The theorem Polynomial.irreducible_dvd_X_pow_sub_X_iff_natDegree_dvd is stated in terms of Fintype.card R. In this context (R = ZMod 2), the cardinality is 2. The previous code relied on implicit unification, which seemingly failed or became brittle. The new code explicitly rewrites ZMod.card to 2 inside tactic blocks (by rw [ZMod.card]; exact ...) to satisfy the type checker. This is a correct and standard way to fix such elaboration issues.
  2. Rewrite Strategy (CharTwo.sub_eq_add): The change from rw to have := ...; simp only and the use of convert later in the proof improves the script's stability against potential unification failures or ambiguity when applying the characteristic 2 property (a - b = a + b).
  3. Recursion Depth: set_option maxRecDepth 1000 is a benign configuration change often required for complex tactic proofs.
  4. Mathematical Correctness: The logic follows the standard irreducibility test:
    • Hypothesis: P is degree 128.
    • Assumption: P is reducible.
    • Implication: There exists an irreducible factor q with deg(q) ≤ 64.
    • Trace condition implies q ∣ X^(2^128) - X, so deg(q) ∣ 128.
    • deg(q) ≤ 64 and deg(q) ∣ 128 implies deg(q) ∣ 64 (every divisor of 128 = 2^7 is a power of two, and the powers of two at most 64 all divide 64).
    • This implies q ∣ X^(2^64) - X (and thus X^(2^64) + X).
    • Contradiction: q divides gcd(X^(2^64) + X, P), but the GCD is given as 1.
    • The formalization correctly maps to this logic.
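The characteristic-2 identity invoked in item 2 above can be stated on its own in Mathlib terms (an illustration, not the PR's code):

```lean
-- In a ring of characteristic 2, every element is its own negative,
-- so subtraction coincides with addition: a - b = a + b
example {R : Type*} [Ring R] [CharP R 2] (a b : R) : a - b = a + b := by
  rw [CharTwo.sub_eq_add]
```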

No misformalization issues were found.

📄 **Review for `CompPoly/Fields/Binary/BF128Ghash/Impl.lean`**

The review of the file CompPoly/Fields/Binary/BF128Ghash/Impl.lean follows.

1. Analysis of Implementation Changes

The changes in this Pull Request are primarily focused on refactoring proof scripts to utilize the grind tactic (likely an automated reasoning/normalization tactic available in the project's environment) and removing redundant tactic calls.

  • set_option maxRecDepth 1500:

    • Logic: Increases the maximum recursion depth for the elaborator.
    • Context: Cryptographic formalizations involving bit vectors and polynomials (especially with tactics like omega or heavy type-class synthesis) often require higher recursion limits. This is a configuration change to enable the subsequent automated proofs.
  • fold_step_mod_eq:

    • Diff: Replaced congr 1; congr 1 with grind.
    • Correctness: The previous manual congruence steps were likely stripping away layers of constructors or function applications. The grind tactic automates this equality checking and normalization. Since the code compiles, grind successfully discharges the goal or prepares it for the subsequent subgoals (indicated by the · bullets).
  • reduce_clMul_correct:

    • Diff: Removed norm_cast at h_deg_R_nat and norm_cast at deg_R.
    • Correctness: The surrounding change tactics (change (n : WithBot ℕ) < (8 : ℕ) at ...) explicitly handle the type viewing/coercion that norm_cast was likely intended to facilitate. If the code compiles without norm_cast, the tactic was redundant or superseded by the change steps and rewrites.
  • toQuot_mul:

    • Diff: Replaced exact h_div with grind.
    • Correctness: The goal of this lemma is toQuot (a * b) = toQuot a * toQuot b. In quotient rings, $x = y$ is defined as $x - y \in I$. The hypothesis h_div establishes exactly that $ghashPoly \mid (a*b)_{poly} - a_{poly} * b_{poly}$. Previously, exact matched this hypothesis to the goal. grind is now used to perform this matching (and potentially unfold definitions if necessary). While exact is more explicit, grind is correct as verified by the compiler.
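The quotient-ring principle the review appeals to — equality of images in R/I is ideal membership of the difference — is standard Mathlib; a standalone illustration (not the PR's actual toQuot code):

```lean
-- Equality in the quotient R/I reduces to a - b ∈ I via Ideal.Quotient.eq
example {R : Type*} [CommRing R] (I : Ideal R) (a b : R) (h : a - b ∈ I) :
    Ideal.Quotient.mk I a = Ideal.Quotient.mk I b :=
  Ideal.Quotient.eq.mpr h
```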

2. Misformalization Checklist

  • Off-by-One Errors: None. The removal of norm_cast preserves the logical bounds checks (n < 8).
  • Recursive Definitions: No changes to definitions.
  • Incorrect Assumptions: No changes to theorem assumptions.
  • Prop vs. Type: Correct usage maintained.
  • Universe Levels: No changes.

3. Verdict & Feedback

Verdict: Correct

The changes are safe refactorings of the proof scripts. They streamline the code by using automation (grind) and removing unnecessary steps, without altering the definitions or the logical statements of the theorems.

Feedback:

  • Refactoring Quality: The move to grind simplifies the proof script maintenance, assuming grind is stable in the build environment.
  • Style: Replacing exact h_div with grind in toQuot_mul is acceptable, though exact is typically preferred for performance and clarity when the term is already constructed. However, this is a minor stylistic point and does not affect correctness.
📄 **Review for `CompPoly/Fields/Binary/BF128Ghash/Prelude.lean`**

Review of CompPoly/Fields/Binary/BF128Ghash/Prelude.lean

Verdict: Correct

Analysis:
The only change introduced in this pull request for this file is the addition of the compiler option set_option maxRecDepth 550.

  1. Logical Correctness: This change does not alter any definitions, theorems, or logical structures. It adjusts the resource limits of the Lean elaborator/kernel.
  2. Necessity: As noted in the comment (-- for ghashPoly_eq_P_val), this is likely required to handle deep recursion stacks that occur when elaborating proofs or definitions involving large concrete structures, such as 128-bit vectors or polynomials over $GF(2^{128})$, which are common in GHash implementations.
  3. Safety: Increasing the recursion depth is a standard operational adjustment in Lean and does not introduce misformalization risks (it does not change Prop vs Type, universe levels, or logical assumptions).

Conclusion:
The change is a safe configuration update required for compilation. No misformalization issues are present.

📄 **Review for `CompPoly/Fields/Binary/BF128Ghash/XPowTwoPowGcdCertificate.lean`**

This review focuses on the changes in CompPoly/Fields/Binary/BF128Ghash/XPowTwoPowGcdCertificate.lean.

Analysis of Changes

  1. Rewrite Strategy (conv_lhs => rw! to rw!):

    • Old: conv_lhs => rw! (castMode:=.all) [h_256_eq]
    • New: rw! (castMode:=.all) [h_256_eq]
    • Logic: The original code restricted the rewrite 256 = 128 + 128 to the left-hand side (LHS) of the goal. The new code applies it to the entire goal.
    • Correctness: This change is valid. The equality 256 = 128 + 128 holds definitionally in Nat. Applying it globally ensures that Fin 256 becomes Fin (128 + 128) wherever it appears (including potentially the type of gcd_b_0_val on the RHS), which enables the subsequent Fin.sum_univ_add to trigger correctly. Since rw (and rw!) operates on syntactic occurrences but respects definitional equality for the purposes of type-checking, this does not alter the logical statement being proven.
  2. Simp Lemma Removal (Fin.coe_castAdd):

    • Old: simp only [..., Fin.coe_castAdd, ...]
    • New: simp only [..., ...]
    • Logic: Fin.coe_castAdd simplifies ((Fin.castAdd n i) : Nat) to (i : Nat).
    • Correctness: Removing this lemma from simp only means the simplifier will not explicitly reduce the coercion ↑(Fin.castAdd ...) to ↑i at this step. However, Fin.castAdd n i is defined as ⟨i.val, ...⟩, so (Fin.castAdd n i).val is definitionally equal to i.val. Since subsequent tactics (like omega or term construction) work up to definitional equality, explicit simplification is not strictly required for correctness, assuming the proof script still compiles (which is confirmed). This suggests the lemma was redundant or handled elsewhere.

Checklist Verification

  • Off-by-One Errors: The split 256 = 128 + 128 is arithmetically correct and aligns with the bounds check val < 2^128.
  • Assumptions/Side-Conditions: The proof relies on h_gcd_b_0_val_lt (value fits in 128 bits), which is correctly established via omega.
  • Prop vs Type: No issues.
  • Universe Levels: No issues.

Verdict

The formalization in this file is correct. The changes represent a cleanup of the proof script (removing an unnecessary conv scope and a redundant simplification lemma) without altering the semantics of the theorem or introducing logical flaws.

Feedback:

  • The refactoring correctly simplifies the tactic script.
  • The global application of rw! [h_256_eq] is safe as it preserves the definitional properties required for the Fin.sum_univ_add split.
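In isolation, the split that the rewrite enables looks like this; a toy example (not the PR's proof), assuming current Mathlib:

```lean
import Mathlib

-- Once the bound is syntactically `128 + 128` (definitionally `256`),
-- `Fin.sum_univ_add` splits the sum into a low and a high half.
example (f : Fin (128 + 128) → ℕ) :
    ∑ i, f i =
      (∑ i : Fin 128, f (Fin.castAdd 128 i)) +
        ∑ i : Fin 128, f (Fin.natAdd 128 i) :=
  Fin.sum_univ_add f
```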
📄 **Review for `CompPoly/Fields/Binary/Common.lean`**

I have reviewed the changes in CompPoly/Fields/Binary/Common.lean. The diff includes a cleanup of toPoly_128_extend_256 and a complete, rigorous rewrite of toPoly_shiftLeft_no_overflow supported by several new helper lemmas.

Analysis of Changes

  1. toPoly_128_extend_256:

    • Logic: The proof logic remains unchanged: splitting the summation range $[0, 256)$ into $[0, 128)$ and $[128, 256)$. The lower part corresponds to the original BitVec 128, and the upper part is shown to be zero because to256 zero-extends.
    • Implementation: The changes update Fin.coe_* lemmas to their newer Fin.val_* counterparts (e.g., Fin.coe_castAdd → Fin.val_castAdd) and simplify the conv_lhs block. This is a correct maintenance update aligning with Mathlib conventions.
  2. Helper Lemmas:

    • BitVec_getLsb_eq_false_of_toNat_lt_two_pow: Correctly deduces that bits at or above index d are false if the number is strictly less than 2^d.
    • BitVec_getElem_eq_false_of_toNat_lt_two_pow: A convenient wrapper for the above using getElem syntax. The preconditions (n < w and d ≤ n) are handled correctly.
    • BitVec_getLsb_shiftLeft: Correctly characterizes the bits of a <<< shift. Logic: (a <<< shift)[n] = a[n - shift] if shift ≤ n, else false. This accurately reflects the semantics of left shift.
    • toPoly_coeff: Provides a precise characterization of the coefficients of the polynomial generated from a BitVec. The case analysis on n < w and the bit value is correct.
  3. toPoly_shiftLeft_no_overflow:

    • Logic: The theorem states that if a fits in d bits and d + shift ≤ w, then converting a <<< shift to a polynomial is equivalent to (toPoly a) * X^shift.
    • Proof Strategy: The new proof uses extensionality (ext n) to compare coefficients at every index $n$. It effectively breaks down the problem into cases:
      • Case $n < w$: Splits into $n < shift$ (coefficient is 0 for both sides) and $n \ge shift$ (the coefficient comes from $a[n-shift]$).
      • Case $n \ge w$: LHS is 0 by definition of toPoly. RHS is 0 because either $n-shift \ge w$ (out of bounds for $a$) or $n-shift \ge d$ (zero because $a < 2^d$).
    • Correctness: The case analysis is exhaustive. The boundary conditions (e.g., n - shift) are handled carefully with appropriate inequalities. The usage of the helper lemmas ensures that the "no overflow" condition (d + shift ≤ w) is correctly applied to show high bits are zero.
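The case split on the RHS mirrors a standard Mathlib fact about coefficient shifting; as a hedged, standalone illustration over Mathlib's Polynomial (not the computable type used in the PR):

```lean
import Mathlib

open Polynomial in
-- A coefficient of `p * X ^ shift` is the shifted coefficient of `p`
-- when `shift ≤ n`, and zero otherwise.
example {R : Type*} [Semiring R] (p : R[X]) (shift n : ℕ) :
    (p * X ^ shift).coeff n = if shift ≤ n then p.coeff (n - shift) else 0 :=
  coeff_mul_X_pow' p shift n
```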

Formalization Checklist

  • Off-by-One Errors: None found. Inequalities (e.g., n < w, d ≤ n) are strict where required and inclusive where appropriate.
  • Recursive Definitions: Not applicable.
  • Incorrect Assumptions/Side-Conditions: The hypotheses ha : a.toNat < 2 ^ d and h_no_overflow : d + shift ≤ w are correct and sufficient for the theorem.
  • Prop vs. Type: Standard usage.
  • Universe Levels: Correct.

Verdict

The changes are correct. The refactoring of toPoly_shiftLeft_no_overflow significantly improves the readability and robustness of the proof compared to the previous conv-heavy approach.

Status: Approved

📄 **Review for `CompPoly/Fields/Binary/Tower/Basic.lean`**

This review focuses on the changes in CompPoly/Fields/Binary/Tower/Basic.lean.

General Analysis:
The pull request performs several cleanups (updating Fin.coe_cast to Fin.val_cast, removing unnecessary rewrites, adding rfl to close definitionally equal goals) and significantly refactors a proof in multilinearBasis_apply. The refactoring replaces a somewhat implicit rewrite strategy with a more rigorous one using a new helper lemma algebraMap_𝕏_eq_of_index_eq.

Specific Checklist & Findings:

  1. Off-by-One / Indexing: The use of h_k_sub_1_add_1 and omega in various places confirms that index arithmetic (e.g., k - 1 + 1 = k) is handled correctly. The logic in multilinearBasis_apply carefully manages the relationship between r, r1 (where r = r1 + 1), and l.
  2. Helper Lemma (algebraMap_𝕏_eq_of_index_eq):
    • The new lemma asserts algebraMap (𝕏 k) = algebraMap (𝕏 m) in BTField r given k = m.
    • It correctly uses subst to unify the indices and the dependent algebra instances (binaryAlgebraTower).
    • This is a robust way to handle "cast" issues where indices are provably equal but syntactically distinct, avoiding fragile rewrites in the main proof.
  3. Refactoring of multilinearBasis_apply:
    • The change from a conv_rhs strategy to refine congr_arg₂ is an improvement. It explicitly separates the proof into two goals: showing the product components are equal and showing the exponent base/power components are equal.
    • The usage of congr_arg₂ (· ^ ·) ... h_exp_eq ensures that both the base (via the new lemma) and the exponent are proven equal.
    • The term h_exp_eq relies on h_prevDiff (linking prevDiff to r1 - l) and the fact that r - l - 1 simplifies to r1 - l (given r = r1 + 1). This arithmetic is sound.
  4. Prop vs Type / Defeq: The use of rfl in multiple places indicates that the terms are definitionally equal after the preceding rewrites (rw! or subst). The change from coe_cast to val_cast aligns with Mathlib conventions for Fin.
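The `subst` pattern behind the helper lemma can be shown generically; a minimal sketch (the actual lemma additionally unifies the dependent binaryAlgebraTower instances):

```lean
-- When two indices are provably equal, `subst` collapses the dependent
-- types; afterwards the two sides are definitionally identical.
example {α : Nat → Type} (f : (k : Nat) → α k) {k m : Nat} (h : k = m) :
    HEq (f k) (f m) := by
  subst h
  rfl
```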

Verdict:
The formalization is correct. The changes improve the robustness of the proofs and perform necessary maintenance.

Code Quality:

  • The refactored proof in multilinearBasis_apply is clearer and less prone to breaking due to implicit cast behavior.
  • The helper lemma is well-scoped and correctly implemented.

No misformalization issues were found.

Formalization status: Correct

📄 **Review for `CompPoly/Fields/Binary/Tower/Impl.lean`**

Here is the review of the changes in CompPoly/Fields/Binary/Tower/Impl.lean.

Summary

The changes in this PR primarily focus on:

  1. Completing proofs: A previous sorry in join_eq_iff_dcast_extractLsb has been replaced with a complete proof using bitwise manipulations and casting lemmas.
  2. Refactoring for robustness: The proof of towerEquiv_commutes_left_diff has been rewritten to use a clearer transitive inductive step rather than a deep manual expansion of definitions. Similarly, multilinearBasis_apply has been refactored to handle type casts more cleanly.
  3. Maintenance: Updates to tactic usage (Nat.add_eq_zero to Nat.add_eq_zero_iff, BitVec.not_lt_iff_le to BitVec.not_lt) and cleanups (removing unnecessary conv blocks).

Specific Feedback

1. join_eq_iff_dcast_extractLsb (Correctness of the new proof)

  • Verification: The logic establishing that a constructed field element $x = \text{join}(h, l)$ is equivalent to checking if the high and low bits of $x$ match $h$ and $l$ is correct.
  • Implementation: The use of BitVec.extractLsb_dcast_eq and BitVec.eq_append_iff_extract correctly bridges the gap between the dcast types and the bitwise operations. The explicit handling of the h_pos proof term via Subsingleton.elim is a good practice here to ensure rewrite tactics work smoothly with dependent types.

2. towerEquiv_commutes_left_diff (Refactor)

  • Verification: The previous proof attempted to unwind the definitions of the tower maps across d steps manually. The new proof essentially argues that if the diagram commutes for d steps (IH) and for 1 step (new lemma towerEquiv_commutes_left_succ), it commutes for d+1 steps via function composition.
  • Correctness: This algebraic approach is significantly more robust and readable. The helper lemmas BTField_algebraMap_succ_eq_join and towerEquiv_commutes_left_succ correctly isolate the inductive step behavior.
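The transitive inductive step has a generic shape; a toy sketch with plain functions and Function.iterate standing in for the tower maps (names ours, not the PR's):

```lean
import Mathlib

-- If the square commutes for one step (`h`), it commutes for `d` steps:
-- the successor case composes `h` with the inductive hypothesis.
example {α : Type*} (F G s : α → α) (h : s ∘ G = F ∘ s) :
    ∀ d, s ∘ G^[d] = F^[d] ∘ s := by
  intro d
  induction d with
  | zero => rfl
  | succ d ih =>
    calc s ∘ G^[d + 1]
        = (s ∘ G^[d]) ∘ G := rfl
      _ = (F^[d] ∘ s) ∘ G := by rw [ih]
      _ = F^[d] ∘ (s ∘ G) := rfl
      _ = F^[d] ∘ (F ∘ s) := by rw [h]
      _ = F^[d + 1] ∘ s := rfl
```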

3. Tactic and Library Updates

  • grind Tactic: The use of grind in towerRingHomForwardMap_split_eq is noted. As the code compiles, this is acceptable, though it implies a dependency on a very recent or specific version of Lean/Mathlib that includes this tactic.
  • Standard Library Changes: The updates from Nat.add_eq_zero to Nat.add_eq_zero_iff and BitVec.not_lt are correct adaptations to standard library changes.

Checklist Review

  • Off-by-One Errors: Indices involving $k, k-1$, and ranges like $2^k-1$ appear correct and consistent with BitVec indexing (0-based) and the field tower definitions.
  • Recursive Definitions: The induction in towerEquiv_commutes_left_diff correctly covers the base case ($d=0$) and successor case.
  • Incorrect Assumptions: h_pos : k > 0 is correctly propagated to ensure operations on $k-1$ are valid.
  • Prop vs Type: No issues found.
  • Universe Levels: No issues found.

Verdict

Correct.
The formalization in this file is correct. The changes successfully resolve a missing proof (sorry) and improve the maintainability of existing proofs.

No changes are requested.

📄 **Review for `CompPoly/Fields/PrattCertificate.lean`**

1. Analyze Implementation

  • Change Analysis: The diff modifies the implementation of the pratt tactic in CompPoly/Fields/PrattCertificate.lean, specifically within the tactic's elaboration block.
  • Logic: The line let ⟨n', pn⟩ ← deriveNat n q(instAddMonoidWithOneNat) is updated to let ⟨n', pn⟩ ← deriveNat n q(Nat.instAddMonoidWithOne).
  • Context: This line invokes deriveNat, a meta-programming function likely responsible for normalizing the Nat expression n into a raw natural number n' and providing a proof pn that n = n'. To construct this proof or perform the normalization, it requires an instance of AddMonoidWithOne Nat.
  • Consistency: The change replaces the identifier instAddMonoidWithOneNat with Nat.instAddMonoidWithOne. This reflects standard refactoring patterns in the Lean 4 ecosystem/Mathlib, where top-level instance names are often moved into the namespace of the type they instantiate (i.e., instAddMonoidWithOneNat becoming Nat.instAddMonoidWithOne). Since the prompt confirms the code compiles, this change correctly aligns with the available definitions in the project's dependencies.

2. Check for Misformalization

  • Off-by-One Errors: Not applicable. The change is a namespace update for an instance.
  • Recursive Definitions: Not applicable.
  • Incorrect Assumptions/Side-Conditions: The underlying assumption is that Nat forms an AddMonoidWithOne, which remains true. The instance passed is simply the canonical one.
  • Prop vs. Type: No misuse.
  • Universe Levels: No impact.

3. Verdict & Feedback

Verdict: Correct.

Feedback:
The change is a necessary maintenance update to resolve a reference to a renamed or standardized instance identifier. Replacing instAddMonoidWithOneNat with Nat.instAddMonoidWithOne ensures the tactic utilizes the correct canonical instance for AddMonoidWithOne Nat as defined in current library versions. As the code compiles, there are no logical issues or misformalizations introduced by this change.
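A quick sanity check of the rename (identifier as stated in the review; assumes a current Mathlib toolchain):

```lean
import Mathlib

-- The renamed identifier still denotes the canonical instance.
#check (Nat.instAddMonoidWithOne : AddMonoidWithOne ℕ)
```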

📄 **Review for `CompPoly/Multivariate/MvPolyEquiv.lean`**


1. Analysis of Implementation

The changes in CompPoly/Multivariate/MvPolyEquiv.lean are strictly limited to the proof script of the lemma degreeOf_equiv.

  • Refactoring Proof Automation: The primary change replaces calls to the aesop tactic with the grind tactic.
    • In the isTrue g branch of the match expression, aesop is replaced by grind.
    • In the isFalse g branch, inside the split at h block, a complex aesop call (aesop (add safe cases Fin) (add safe (by simp only at g''))) is replaced by grind.
  • Logic Preservation: Since the instructions confirm the code compiles, the grind tactic successfully discharges the proof obligations that were previously handled by aesop. This suggests that the logical steps required to prove the goal are within the capabilities of the grind automation in this context.
  • Whitespace: There is a minor whitespace cleanup removing an empty line at the end of the file.

2. Check for Misformalization

  • Off-by-One Errors: Not applicable. No arithmetic definitions or indices are modified; only the proof method is changed.
  • Recursive Definitions: Not applicable.
  • Incorrect Assumptions/Side-Conditions: The theorem statement for degreeOf_equiv remains unchanged. The preconditions and hypotheses are identical to the previous version.
  • Prop vs. Type: No changes to type declarations.
  • Universe Levels: No changes to universe polymorphism.

The change does not alter what is being proven, only how it is proven. Therefore, no misformalization is introduced regarding the mathematical content.

3. Verdict & Feedback

Verdict: Correct

The changes are a valid refactoring of the proof script. Replacing specific aesop configurations with grind simplifies the code structure while maintaining correctness (evidenced by the code compiling). There are no changes to definitions or theorem statements that would impact the formalization specification.

Feedback:

  • The transition from aesop to grind simplifies the proof block significantly, removing the need for manual hint addition (add safe cases Fin, etc.).
  • The removal of the trailing blank line is a minor stylistic improvement.
📄 **Review for `CompPoly/ToMathlib/Finsupp/Fin.lean`**

Review of CompPoly/ToMathlib/Finsupp/Fin.lean

1. Analysis of Implementation

The changes in this file are restricted to the proofs of two simplification theorems: snoc_zero_zero and insertNth_zero_zero.

  • Hunk 1 (snoc_zero_zero):

    • Change: Replaced equivFunOnFinite_symm_apply_toFun with coe_equivFunOnFinite_symm in the simp only list.
    • Logic: The theorem proves that appending a zero element to a zero Finsupp vector results in a zero Finsupp vector. The proof strategy is by extensionality (ext a). The simplifier unpacks the definition of snoc. The snoc operation on Finsupp (over a finite domain) typically utilizes the equivalence equivFunOnFinite : (Fin n →₀ M) ≃ (Fin n → M). The definition likely involves mapping to functions, performing Fin.snoc, and mapping back via symm. To prove equality to 0, simp needs to reduce the application of the inverse equivalence.
    • Context: The lemma rename from ..._apply_toFun to coe_... suggests an alignment with Lean 4 / Mathlib naming conventions where functional objects are accessed via coercion (⇑) rather than explicit field accessors (.toFun) or specific application lemmas.
  • Hunk 2 (insertNth_zero_zero):

    • Change: Replaced equivFunOnFinite_symm_apply_toFun with coe_equivFunOnFinite_symm in the simp only list.
    • Logic: Similar to the previous theorem, this asserts that inserting a zero at the $n$-th position of a zero vector results in a zero vector. The logic mirrors snoc, relying on the interaction between Finsupp and the underlying function type.
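Stripped of the Finsupp wrapper, the underlying fact is a one-liner about Fin.snoc on plain functions; a sketch, not the file's statement:

```lean
import Mathlib

-- Appending a zero entry to the zero function yields the zero function.
example {M : Type*} [Zero M] (n : ℕ) :
    Fin.snoc (0 : Fin n → M) (0 : M) = (0 : Fin (n + 1) → M) := by
  funext i
  induction i using Fin.lastCases <;> simp
```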

2. Check for Misformalization

  • Off-by-One Errors: Not applicable. The change only affects the proof script, not the indices or boundaries of Fin n.
  • Recursive Definitions: None involved.
  • Incorrect Assumptions: The theorems rely on [Zero M], which is standard and implicitly present via the usage of 0. No assumptions were modified.
  • Prop vs. Type: No issues.
  • Universe Levels: No issues.
  • Compilation: The prompt confirms the code compiles. This implies coe_equivFunOnFinite_symm is a valid lemma in the context that successfully discharges the goal when combined with the other lemmas in the simp set.

3. Verdict & Feedback

Verdict: Correct

The changes represent a maintenance update, likely reflecting a refactor in the underlying library (Mathlib) or local definitions where a lemma regarding the application of equivFunOnFinite.symm was renamed or standardized to use coercions (coe). The logic of the theorems snoc_zero_zero and insertNth_zero_zero remains correct, and the proof scripts have been updated to use the appropriate lemma to close the goals.

No misformalizations found.

📄 **Review for `CompPoly/Univariate/Basic.lean`**

This is a rigorous review of the changes in CompPoly/Univariate/Basic.lean.

1. Analysis of Implementation

  • BEq Instance: The implementation beq p q := p.val == q.val is correct. Since CPolynomial R is a subtype defined by the canonicalization predicate p.trim = p, logical equality of polynomials implies structural equality of their underlying canonical arrays. Thus, checking the underlying arrays is sufficient.
  • LawfulBEq Instance:
    • eq_of_beq: The proof correctly lifts the equality from the subtype's value (p.val = q.val) to the subtype itself (p = q) using Subtype.ext.
    • rfl: The logic is sound (it relies on the underlying LawfulBEq), but the proof script provided (convert h_raw _, swap, exact #[]) is unnecessarily convoluted and brittle, likely a relic of an interactive proof session. It does not affect correctness but is poor code quality.
  • DecidableEq Instance: The instance correctly leverages the underlying DecidableEq of the raw representation (implied Array R) and the decidability of the predicate p.trim = p.
  • Nontrivial Instance: The proof correctly identifies 0 (empty array, size 0) and 1 (canonical #[1], size 1) as distinct elements based on their underlying array sizes. This relies on Nontrivial R ensuring 1 ≠ 0 so that Raw.C 1 does not collapse to an empty array.
  • coeff_mul_X_succ Lemma: The removal of omega suggests the goal n < n + 2 is now handled by simp (likely via Finset.mem_range simplification or ambient arithmetic simplifiers). As the code compiles, this is a valid optimization.
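In toy form, the lifting pattern behind the BEq and LawfulBEq instances looks as follows (the predicate P is hypothetical, standing in for the p.trim = p canonicity condition):

```lean
import Mathlib

variable {R : Type*} [BEq R] [LawfulBEq R] {P : Array R → Prop}

-- Lift `BEq` from the carrier arrays to the subtype.
instance : BEq {a : Array R // P a} where
  beq p q := p.val == q.val

-- `eq_of_beq` on the underlying arrays plus `Subtype.ext` recovers
-- equality of the subtype elements.
example (p q : {a : Array R // P a}) (h : p.val == q.val) : p = q :=
  Subtype.ext (eq_of_beq h)
```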

2. Checklist for Misformalization

  • Off-by-One Errors: None found. The Nontrivial proof correctly distinguishes size 0 and size 1.
  • Recursive Definitions: N/A.
  • Incorrect Assumptions:
    • The LawfulBEq instance correctly requires [LawfulBEq R].
    • The Nontrivial instance correctly requires [Nontrivial R].
  • Type Hierarchy / Universe Levels: The universe levels are implicitly handled correctly (Type*).
  • Redundancy: In the DecidableEq instance, the constraint [BEq R] is redundant because [DecidableEq R] is also present. DecidableEq R is sufficient to derive decidable equality for the arrays, and BEq is not used explicitly in the definition provided (Subtype.instDecidableEq uses propositional equality decidability).

3. Verdict & Feedback

Verdict: The formalization is Correct.

There are no critical logical errors or misformalizations. However, there are minor code quality issues regarding style and redundancy.

Minor Issues:

  1. Redundant Typeclass Constraint:
    In the DecidableEq instance, [BEq R] is unnecessary because [DecidableEq R] is already required.

    -- Current
    instance (R : Type*) [BEq R] [DecidableEq R] [Semiring R] : DecidableEq (CPolynomial R) ...
    -- Suggested
    instance (R : Type*) [DecidableEq R] [Semiring R] : DecidableEq (CPolynomial R) ...
  2. Obscure Proof Script (LawfulBEq):
    The rfl proof in LawfulBEq is messy. It creates a local lemma h_raw, applies it to a hole, and then forcibly fills that hole with an empty array. It essentially proves a == a in a roundabout way that ends up just using simp.

    -- Current
    rfl := by
      have h_raw : ∀ (a : CPolynomial.Raw R), a == a ↔ a = a := by
        aesop
      convert h_raw _
      swap
      exact #[]
      simp +decide [ BEq.beq ]
    
    -- Suggested (Cleaner)
    rfl := by
      simp [BEq.beq]
📄 **Review for `CompPoly/Univariate/Quotient.lean`**

Here is the review of the changes in CompPoly/Univariate/Quotient.lean.

Verdict

Formalization Correct. The logic implemented in the new proof is sound and correctly formalizes the theorem statement. However, there are code quality issues regarding variable shadowing and tactic usage that should be addressed to maintain engineering standards.

Analysis of Implementation

The new proof strategy refactors the previous explicit index construction into a list manipulation argument:

  1. Decomposition: It decomposes the zipped index list of p into a prefix of length p.trim.size and a suffix (witness l).
  2. Prefix Equality: It proves the prefix is equal to p.trim.zipIdx.toList. This relies on the fact that p.trim and p share the same coefficients within the bounds of p.trim, and that the indices in zipIdx are absolute. This logic is sound.
  3. Suffix Property: It proves that the remaining elements in the suffix l have coefficients of 0. This relies on the definition of trim (elements beyond the trimmed size are zero) and the fact that l contains elements with indices ≥ p.trim.size.
  4. Boundaries: The use of take and drop with p.trim.size correctly handles the list splitting. Since p.trim is a contraction of p, p.trim.size ≤ p.size, preventing out-of-bounds issues during the split.
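The split-and-recombine step leans on a standard identity; in isolation:

```lean
-- Splitting a list at an index and recombining is the identity.
example {α : Type} (l : List α) (n : Nat) :
    l.take n ++ l.drop n = l :=
  List.take_append_drop n l
```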

Findings

While no logical misformalizations were found, the following code quality issues were identified:

1. Variable Shadowing (Minor Issue)
In the second main subgoal (proving ∀ x ∈ l, x.1 = 0), the variable name h_trailing is defined four separate times within the same tactic block.

  • Definitions 1 & 2: Relate to coefficients being zero.
  • Definition 3: Relates list membership to coefficient lookups.
  • Definition 4: Relates list membership to index bounds.

While valid in Lean (the latest hypothesis shadows the previous ones), this makes the proof difficult to read, maintain, or debug.

2. Redundant Tactic Usage (Trivial Issue)
Inside the proof of h_trim, the code uses:

exact (by exact coeff_eq_coeff p i)

This is a redundant nested tactic block. It should simply be exact coeff_eq_coeff p i.

Suggestions

I recommend applying the following cleanup to resolve the shadowing and style issues while retaining the correct logic:

lemma zipIdx_trim_append {R : Type*} [Ring R] [BEq R] [LawfulBEq R]
    (p : CPolynomial.Raw R) :
    ∃ l, p.zipIdx.toList = p.trim.zipIdx.toList ++ l ∧ ∀ x ∈ l, x.1 = 0 := by
  have h_zipIdx : (p.zipIdx.toList.take p.trim.size) = (p.trim.zipIdx.toList) := by
    have h_coeff_eq : p.trim.coeff = p.coeff := by
      ext i; exact coeff_eq_coeff p i
    refine' List.ext_get _ _ <;> simp
    · exact size_le_size p
    · intro n _ _
      -- Use the coefficient equality to show the zipped pairs are identical
      have := congr_fun h_coeff_eq n
      aesop
  
  -- Define the witness as the dropped suffix
  refine' ⟨ List.drop (Array.size p.trim) (Array.zipIdx p |> Array.toList), _, _ ⟩ <;> simp_all
  · -- Prove the list concatenation equality
    rw [← h_zipIdx, List.take_append_drop]
  · -- Prove all elements in the suffix have 0 coefficient
    intro a b hab
    have h_zero_after_trim : ∀ i ≥ p.trim.size, p.coeff i = 0 := by
      intro i hi
      rw [Eq.symm (coeff_eq_coeff p i)]
      -- Assuming standard library behavior where out-of-bounds/trimmed coeffs are 0
      grind 
    have h_mem_coeff : ∀ x ∈ List.drop (Array.size p.trim) (Array.zipIdx p |> Array.toList),
        x.1 = p.coeff x.2 := by
      intro x hx
      have : ∀ y ∈ List.zipIdx p.toList, y.1 = p.coeff y.2 := by
        simp +decide [List.mem_iff_get]
        grind
      convert this x _
      exact List.mem_of_mem_drop hx
    have h_idx_ge : ∀ x ∈ List.drop (Array.size p.trim) (Array.zipIdx p |> Array.toList),
        x.2 ≥ p.trim.size := by
      intro x hx; have := List.mem_iff_get.mp hx; aesop
    aesop
📄 **Review for `CompPoly/Univariate/Raw.lean`**

This review focuses on the changes in CompPoly/Univariate/Raw.lean.

1. Analysis of Changes

The change occurs within the proof of the theorem coeff_eq_getElem_of_lt.
A single line exact Nat.succ_le_of_lt k.is_lt is removed from the proof of a local hypothesis h_size.

Diff Analysis:

  • Original Code:

    have h_size : k + 1 = (p.extract 0 (k + 1)).size := by
      simp [Array.size_extract]
      exact Nat.succ_le_of_lt k.is_lt

    The original proof used simp to presumably reduce the expression and then manually supplied the inequality k + 1 ≤ p.size (derived from k.is_lt) to close the goal.

  • New Code:

    have h_size : k + 1 = (p.extract 0 (k + 1)).size := by
      simp [Array.size_extract]

    The manual exact step is removed.

  • Correctness Logic:
    The hypothesis h_size asserts that the size of the extracted subarray p.extract 0 (k + 1) is exactly k + 1.
    For an array p, (p.extract 0 (k + 1)).size evaluates to min (k + 1) p.size.
    The equality k + 1 = min (k + 1) p.size holds if and only if k + 1 ≤ p.size.
    The term k.is_lt (likely referring to k : Fin p.size) provides the proof that k < p.size, which is definitionally equivalent to k + 1 ≤ p.size.
    Since the code compiles, the simp [Array.size_extract] tactic is now sufficient to locate the necessary bound in the context or simplify the min expression automatically.

2. Misformalization Checklist

  • Off-by-One Errors: The logic relies on k < p.size (from k.is_lt) implying k + 1 ≤ p.size. This is standard natural number arithmetic and is correct.
  • Recursive Definitions: N/A.
  • Incorrect Assumptions/Side-Conditions: The type of h_size is explicitly ascribed (k + 1 = ...), ensuring the semantic meaning of the hypothesis remains unchanged. The removal of the proof line does not weaken the statement being proven, only the automation used to prove it.
  • Prop vs. Type: No issues.
  • Universe Levels: No issues.

3. Verdict & Feedback

The formalization is Correct.

The change is a valid cleanup of the proof script. The removal of exact Nat.succ_le_of_lt k.is_lt indicates that the simplifier is able to discharge the goal automatically, likely by finding the bound in the local context. The integrity of the theorem coeff_eq_getElem_of_lt is preserved because the type of the intermediate hypothesis h_size has not changed.

Feedback:

  • The change correctly simplifies the proof by relying on simp to handle the arithmetic bound check.
  • The logic remains sound: (p.extract 0 (k+1)).size is indeed k+1 provided k < p.size, which is guaranteed by k being a Fin p.size.
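The arithmetic in that hypothesis replays in a standalone toy example (hypothesis names ours):

```lean
import Mathlib

-- `Array.size_extract` reduces the size to `min (k + 1) p.size - 0`;
-- the bound `k < p.size` then collapses the `min`.
example {α : Type*} (p : Array α) (k : Nat) (hk : k < p.size) :
    k + 1 = (p.extract 0 (k + 1)).size := by
  simp only [Array.size_extract]
  omega
```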
📄 **Review for `lakefile.lean`**

Review of lakefile.lean

1. Analysis of Implementation

The changes in this file constitute a dependency update for the project CompPoly.

  • Hunk 1:
    • Updates the mathlib dependency from v4.26.0 to v4.28.0.
    • Updates the ExtTreeMapLemmas dependency from v4.26.0 to v4.28.0.

This appears to be a synchronized upgrade of the toolchain/library versions. Assuming ExtTreeMapLemmas follows a versioning scheme tied to Mathlib/Lean versions (which is common practice in the Lean ecosystem to ensure ABI compatibility), updating both to the same version tag (v4.28.0) is the correct procedure to maintain compatibility.

2. Check for Misformalization

As this is a build configuration file using the Lake DSL, the standard logical verification checklist applies differently:

  • Syntax and Semantics: The syntax adheres to valid Lake configuration standards.
  • Version Consistency: The update maintains version parity between the core library (mathlib) and the auxiliary library (ExtTreeMapLemmas). This minimizes the risk of diamond dependency conflicts or ABI mismatches, provided ExtTreeMapLemmas v4.28.0 is indeed built against mathlib v4.28.0.
  • Compilation: As the instructions state the code compiles, the git tags are presumed to exist and be reachable.

There are no logical definitions, theorems, or proofs in this file, so there are no risks of logical off-by-one errors, incorrect universe levels, or improper use of Prop/Type.

3. Verdict & Feedback

Verdict: Correct

The changes correctly update the project dependencies to a newer version. No formalization issues are present.
