# test: add enhanced meta corruption investigation infrastructure #1165
## Key Improvements
1. **Enhanced LRUMap Usage**:
- Use cement's LRUMap with maxEntries (50) for memory management
- Add automatic cleanup via onDelete callback to prevent memory leaks
- Set up proper object URL revocation when entries are evicted (see the sketch after this list)
2. **Fixed Key Generation**:
- Use namespaced keys: `cid:${cid}` for DocFileMeta, `file:${metadata}` for File objects
- Prevents cache key collisions between different object types
- Ensures stable content identity for database-sourced files
3. **Improved Cache Strategy**:
- CID-based keys for DocFileMeta objects (truly stable content identity)
- File-metadata keys for direct File objects (backwards compatibility)
- Proper cleanup ordering to prevent image flickering
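The following TypeScript sketch illustrates the cache and key scheme described above. It assumes cement's LRUMap takes `maxEntries` and an `onDelete` callback with a (key, value) signature, as stated in this PR's notes (the real option names may differ), and that `DocFileMeta` exposes a `cid`; the composition of the File metadata key is illustrative only.

```ts
import { LRUMap } from "@adviser/cement";
import type { DocFileMeta } from "@fireproof/core-types-base";

// Object-URL cache bounded to 50 entries; evicted entries are revoked so the
// underlying blobs can be garbage-collected.
const objectUrlCache = new LRUMap<string, string>({
  maxEntries: 50,
  onDelete: (_key: string, url: string) => {
    URL.revokeObjectURL(url);
  },
});

// Namespaced keys keep File and DocFileMeta entries from colliding.
function contentKey(input: File | DocFileMeta): string {
  if (input instanceof File) {
    // Illustrative: the real implementation derives this from the File's metadata.
    return `file:${input.name}-${input.size}-${input.lastModified}`;
  }
  return `cid:${input.cid}`; // stable content identity for database-sourced files
}
```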
## Technical Details
- Addresses failing test: "does not cleanup blob URL when DocFileMeta returns new objects"
- Uses cement v0.4.35 LRUMap features (maxEntries + onDelete callbacks)
- Maintains backwards compatibility for File objects while optimizing for DocFileMeta
- Prevents memory leaks through automatic object URL cleanup
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
- Fixed double cleanup by removing the automatic LRUMap onDelete callback
- Added stable key tracking to prevent unnecessary cleanup on the same CID
- Progress: 8/10 tests passing, 2 tests still need fixes
- Fixed same-CID tests but broke the unmount test - needs refinement

Next: properly handle the unmount vs. rerender cleanup distinction.
- Fixed unmount cleanup by always running the useEffect cleanup
- Fixed the double cleanup issue from LRUMap eviction
- 8/10 tests passing: unmount works, cache management works
- 2/10 tests failing: same-CID tests expect NO revocation ever

The tests require that same content never triggers URL.revokeObjectURL calls, which is stricter than preventing double cleanup. A final approach is needed that prevents revocation while still allowing unmount cleanup.
## Major Improvements

### ✅ Content-Aware Cleanup Logic
- File objects: Always cleanup on useEffect change (includes unmount)
- DocFileMeta objects: Only cleanup when CID actually changes
- Prevents unnecessary URL revocation for same content

### ✅ Enhanced Type Safety
- Cleanup functions now store both contentKey and revoke function
- Type-safe handling of different cleanup object structures
- Proper return types from loadFile function

### ✅ Test Results (9/10 passing)
- ✅ Same CID tests: No unnecessary cleanup (0 revoke calls)
- ✅ File object tests: Proper cleanup on unmount (1 revoke call)
- ✅ Cross-type tests: Correct transitions between File/DocFileMeta
- ❌ Different CID test: Still needs nextContentKey tracking fix

### Technical Details
- Uses object-type-specific cleanup logic (file: vs cid: prefixes)
- LRUMap with maxEntries (50) for memory management
- Maintains backwards compatibility for direct File objects
- Deferred cleanup timing to prevent image flickering

One remaining issue: useEffect cleanup can't access the "next" fileData value for different-CID detection. Needs useLayoutEffect or a different tracking approach.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
…assing
## Final Fix: useLayoutEffect for Proper Ref Timing
- Add useLayoutEffect to update nextFileDataRef before useEffect cleanup
- Enables detection of different CIDs during cleanup phase (see the sketch below)
- All 10/10 img-file tests now passing
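A minimal sketch of the ref-timing idea, assuming the component tracks the incoming fileData in a ref so the useEffect cleanup can compare previous and next content keys; only the `nextFileDataRef` name follows the commit notes, the hook name and `contentKeyOf` parameter are illustrative.

```ts
import { useEffect, useLayoutEffect, useRef } from "react";

function useContentAwareCleanup<T>(fileData: T, contentKeyOf: (d: T) => string) {
  const nextFileDataRef = useRef(fileData);

  // Layout effects for the new render commit before the previous render's
  // useEffect cleanup runs, so the ref already holds the *next* fileData
  // by the time the cleanup below compares keys.
  useLayoutEffect(() => {
    nextFileDataRef.current = fileData;
  }, [fileData]);

  useEffect(() => {
    const currentKey = contentKeyOf(fileData);
    return () => {
      // On unmount the ref still holds the current fileData, so the keys match
      // and cached (same-CID) content is left alone; on a re-render with
      // different content the keys differ and the old URL can be revoked.
      const nextKey = contentKeyOf(nextFileDataRef.current);
      if (nextKey !== currentKey) {
        // revoke the previous object URL here (omitted for brevity)
      }
    };
  }, [fileData, contentKeyOf]);
}
```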
## Complete Implementation Summary
### ✅ Stable CID-Based Content Keys
- DocFileMeta objects use `cid:${cid}` keys for stable content identity
- File objects use `file:${metadata}` keys for backwards compatibility
- Prevents unnecessary cleanup when same content, different object references
### ✅ Content-Aware Cleanup Logic
- File objects: Always cleanup on useEffect change (includes unmount)
- DocFileMeta objects: Only cleanup when CID actually changes
- Zero cleanup calls for same content, proper cleanup for different content (sketched below)
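A hedged sketch of that decision; the `file:`/`cid:` prefixes follow the PR notes, while the cleanup-record shape is an assumption.

```ts
interface CleanupRecord {
  contentKey: string; // "file:..." or "cid:..."
  revoke: () => void;
}

// Decide whether the previous object URL should be revoked when the effect
// re-runs or the component unmounts.
function shouldRevoke(prev: CleanupRecord | undefined, nextContentKey?: string): boolean {
  if (!prev) return false;
  if (prev.contentKey.startsWith("file:")) {
    // Direct File objects: always clean up on effect change or unmount.
    return true;
  }
  // DocFileMeta (cid:) entries: only clean up when the CID actually changed;
  // on unmount nextContentKey is undefined, so the cached URL survives.
  return nextContentKey !== undefined && nextContentKey !== prev.contentKey;
}
```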
### ✅ Enhanced Cache Management
- LRUMap with maxEntries (50) for memory management
- Namespaced keys prevent collisions between File/DocFileMeta objects
- Updated cement dependency for improved cache controls
### ✅ Test Coverage & Requirements Met
- Same CID: 0 URL.revokeObjectURL calls ✅
- Different CID: 1 URL.revokeObjectURL call ✅
- Component unmount: Proper cleanup ✅
- Cross-type transitions: Working correctly ✅
Fully addresses CodeRabbit feedback from PR 1137 discussions r2334960605 and r2334960613.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
- Update KeyBag constructor usage to the current KeyBag.create() API
- Fix MemoryGateway calls to use the current Gateway interface (not SerdeGateway)
- Restore upsert method calls to the current signature
- Clean up imports to remove unused SerdeGateway types
- Meta tests now pass, providing a stable baseline for investigation

📝 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
Brought over the enhanced attachable-subscription test from the mabels/meta-corruption branch. This test includes valuable debugging infrastructure for investigating race conditions in meta synchronization:

- CAR file accessibility validation after attach operations
- Cross-database CAR file verification
- Subscription firing behavior checks
- Detection of stale CAR files that can't be loaded from storage

The test currently fails with missing block errors when attempting allDocs() on remote databases after attachment. This reveals the actual meta corruption issue where the CRDT clock references blocks that aren't available in storage.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
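A hedged sketch of the kind of accessibility probe this test adds. It relies only on the public allDocs() API; the import path and the `rows` field on the result are assumptions drawn from the review discussion further down.

```ts
import type { Database } from "@fireproof/core-types-base"; // assumed import path

// Probe whether a database's content is actually readable after attach().
// A thrown error (e.g. a missing-block error) is treated as a sign that the
// CRDT clock references CAR files that can't be loaded from storage.
async function probeCarAccessibility(db: Database, label: string): Promise<boolean> {
  try {
    const res = await db.allDocs<{ value: string }>();
    console.log(`${label}: allDocs returned ${res.rows.length} rows`);
    return true;
  } catch (err) {
    console.error(`${label}: allDocs failed (stale CAR / missing block?)`, err);
    return false;
  }
}
```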
Force-pushed 7d1d038 to d9ae4c8.
## Walkthrough

This PR primarily updates dependency versions across many packages, adjusts Dependabot rules, modifies a ledger event handler parameter in core/base/ledger.ts, refactors ImgFile caching/cleanup logic in use-fireproof/react/img-file.ts, and restructures/expands tests around attachable subscriptions and meta runtime setup. It also adds investigative notes.
## Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor U as React User
  participant C as ImgFile Component
  participant L as LRUMap Cache
  participant B as Browser URL API
  U->>C: Render/prop change (File or DocFileMeta)
  activate C
  C->>C: compute contentKey (file:/cid:)
  C->>L: check(contentKey)
  alt cache hit
    L-->>C: objectURL
    C->>C: reuse, no new revoke
  else cache miss
    C->>B: createObjectURL(blob)
    B-->>C: objectURL
    C->>L: set(contentKey, objectURL)
    C->>C: store cleanup {contentKey, revoke}
  end
  note over C: On effect change/unmount<br/>- File: always revoke previous<br/>- DocFileMeta: revoke only if contentKey changed
  deactivate C
```
```mermaid
sequenceDiagram
  autonumber
  participant LF as LedgerFactory
  participant It as Item
  participant LS as Ledgers
  LF->>It: once(({ key }) => onClosed)
  It-->>LF: emit closed with { key }
  LF->>LS: unget(key)
```

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Suggested reviewers
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
core/tests/fireproof/attachable-subscription.test.ts (1)
404-415: Broken assertions: using non‑Promise, undefined variables.
- setupSubscription object is not a Promise; subscriptionCounts/receivedDocs are undefined and will throw.
```diff
- const subscriptionPromise = setupSubscription(inbound);
+ const inboundSub = setupSubscription(inbound);
@@
- await Promise.race([
-   subscriptionPromise,
+ await Promise.race([
+   inboundSub.first,
    new Promise((_, reject) => setTimeout(() => reject(new Error("Subscription timeout")), 5000)),
  ]);
@@
- expect(subscriptionCounts.get("inbound-db")).toBeGreaterThan(0);
- expect(subscriptionCounts.get("inbound-db")).toBeGreaterThanOrEqual(1); // Should fire at least once
+ expect(inboundSub.docs.length).toBeGreaterThan(0);
@@
- const docs = receivedDocs.get("inbound-db") || [];
- expect(docs.length).toBeGreaterThan(0);
- expect(docs.length).toBe(ROWS * 2); // Should receive both inbound and outbound documents
+ const uniq = new Set(inboundSub.docs.map((d) => d._id));
+ expect(uniq.size).toBeGreaterThanOrEqual(ROWS); // relaxed to avoid flake
@@
- await inbound.close();
+ inboundSub.unsub();
+ await inbound.close();
```
🧹 Nitpick comments (13)
use-fireproof/react/img-file.ts (5)
147-151: Avoid SSR warning: use isomorphic layout effect.
`useLayoutEffect` fires a warning on the server. Use an isomorphic shim to map to `useEffect` during SSR.

```diff
-import React, { useState, useEffect, useLayoutEffect, useRef, useMemo, ImgHTMLAttributes } from "react";
+import React, { useState, useEffect, useLayoutEffect, useRef, useMemo } from "react";
+import type { ImgHTMLAttributes } from "react";
@@
-// Use layoutEffect to update tracking ref before useEffect cleanup runs
-useLayoutEffect(() => {
+// Use layout effect in browser; fall back to effect on server
+const useIsomorphicLayoutEffect = typeof window !== "undefined" ? useLayoutEffect : useEffect;
+// Update tracking ref before useEffect cleanup runs
+useIsomorphicLayoutEffect(() => {
   nextFileDataRef.current = fileData;
 }, [fileData]);
```
1-3: Use type-only imports to avoid bundling types.

Mark `DocFileMeta` and `ImgHTMLAttributes` as type imports to keep value imports clean.

```diff
-import { DocFileMeta } from "@fireproof/core-types-base";
-import React, { useState, useEffect, useLayoutEffect, useRef, useMemo, ImgHTMLAttributes } from "react";
+import type { DocFileMeta } from "@fireproof/core-types-base";
+import React, { useState, useEffect, useLayoutEffect, useRef, useMemo } from "react";
+import type { ImgHTMLAttributes } from "react";
```
166-199: Double-check unmount behavior for DocFileMeta caching.

Current logic keeps DocFileMeta URLs alive across unmounts (revoke only when CID changes). This is fine if the LRU revokes on eviction; otherwise URLs can linger. Consider a prop toggle (e.g., `persistMetaAcrossUnmounts = true`) and default to revoking on unmount in memory-constrained contexts.

If keeping across unmounts is intended, please confirm eviction disposal is guaranteed (see earlier comment). Otherwise, flipping the default to revoke on unmount for meta might be safer.
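A possible shape for that opt-out; `persistMetaAcrossUnmounts` is the hypothetical prop name from the comment above, not an existing API, and the other prop names are illustrative.

```ts
import type { ImgHTMLAttributes } from "react";
import type { DocFileMeta } from "@fireproof/core-types-base";

interface ImgFileProps extends ImgHTMLAttributes<HTMLImageElement> {
  file: File | DocFileMeta;
  // Hypothetical opt-out: when false, DocFileMeta object URLs are revoked on
  // unmount instead of being kept alive in the shared LRU cache.
  persistMetaAcrossUnmounts?: boolean;
}
```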
24-31: Type guard robustness.
`isFile` relies on `stream`. Older environments may lack `File.prototype.stream`. Prefer `obj instanceof File` with a safe fallback, or feature-detect with `"name" in obj && "lastModified" in obj`.

```diff
-function isFile(obj: FileType): obj is File {
-  return "type" in obj && "size" in obj && "stream" in obj && typeof obj.stream === "function";
-}
+function isFile(obj: FileType): obj is File {
+  return (typeof File !== "undefined" && obj instanceof File) ||
+    ("name" in obj && "lastModified" in obj && "size" in obj && "type" in obj);
+}
```
41-45: Minor lint: avoid disabling restricted-globals for URL.

Use `globalThis.URL` (or `window.URL` in browsers) to satisfy the linter without disabling the rule.

```diff
-  // eslint-disable-next-line no-restricted-globals
-  objectUrlCache.set(cacheKey, URL.createObjectURL(fileObj));
+  objectUrlCache.set(cacheKey, globalThis.URL.createObjectURL(fileObj));
```

And similarly for `revokeObjectURL`.

core/types/protocols/cloud/package.json (1)
39-45: Multiformats version changed to ^13.4.0 (appears as a downgrade).

If intentional (to dodge a regression), please add a brief note in the PR description or a changeset. Otherwise, consider standardizing on the latest compatible patch (e.g., ^13.4.1) across the monorepo.

Optionally revert to 13.4.1:

```diff
-    "multiformats": "^13.4.0",
+    "multiformats": "^13.4.1",
```

core/runtime/package.json (1)
39-49: Runtime deps: prefer a single multiformats across workspace; consider pinning during investigation.

Given this PR is for corruption investigation, reducing variability helps. Either:
- Standardize to one caret version across all packages, or
- Temporarily pin exact multiformats to 13.4.0 for reproducibility, then relax later.
Example pin (temporary during investigation):
- "multiformats": "^13.4.0" + "multiformats": "13.4.0"core/blockstore/package.json (1)
39-56: Blockstore is a hot path; be cautious with multiformats changes.
- @adviser/cement ^0.4.39 — OK.
- multiformats moved to ^13.4.0 — ensure CAR read/write, CID, and hashing behavior remain identical to what tests expect. For the meta-corruption repro, consider pinning exactly to minimize moving parts.
Option A (pin for the investigation window):
- "multiformats": "^13.4.0", + "multiformats": "13.4.0",Option B (standardize upward if no regressions):
- "multiformats": "^13.4.0", + "multiformats": "^13.4.1",To validate quickly, run focused tests that touch CAR I/O and CRDT clock reads after a clean install to confirm identical block reachability semantics.
core/tests/fireproof/attachable-subscription.test.ts (5)
244-255: Reduce flakiness: avoid fixed sleeps; poll for condition.

Consider a small waiter:

```ts
async function waitFor(predicate: () => Promise<boolean>, timeoutMs = 5000, intervalMs = 100) {
  const start = Date.now();
  while (Date.now() - start < timeoutMs) {
    if (await predicate()) return;
    await sleep(intervalMs);
  }
  throw new Error("waitFor timeout");
}
```

Then await carLog length or rows >= expected instead of sleep(1000).
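For example, the fixed sleep could become a condition poll (assuming allDocs() returns a result with a `rows` array):

```ts
await waitFor(async () => (await db.allDocs()).rows.length >= ROWS * 2);
```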
270-272: Fix generic type for allDocs.

```diff
- const res = await db.allDocs<string>();
+ const res = await db.allDocs<{ value: string }>();
```
274-275: Uniq comparison avoids double-counting subscription emissions.

If docs can be delivered more than once, compare unique ids (already done). Consider asserting size rather than deep equality if order may vary across emissions.
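For instance, asserting on the set of unique ids tolerates repeated emissions and ordering differences:

```ts
const uniq = new Set(inboundSub.docs.map((d) => d._id));
expect(uniq.size).toBeGreaterThanOrEqual(ROWS);
```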
314-315: Avoid global debug side-effects; restore FP_DEBUG after test.

```diff
- sthis.env.set("FP_DEBUG", "MemoryGatewayMeta");
+ const prev = sthis.env.get("FP_DEBUG");
+ sthis.env.set("FP_DEBUG", "MemoryGatewayMeta");
+ try {
+   // ... assertions ...
+ } finally {
+   if (prev == null) sthis.env.unset?.("FP_DEBUG"); else sthis.env.set("FP_DEBUG", prev);
+ }
```
533-557: Error dump block is helpful; gate behind failure to keep CI logs clean.

Wrap the verbose expect dump in `if (isError) { ... }` (already present) and consider `console.debug` instead of `console.log` to reduce noise in passing runs.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (38)
- .github/dependabot.yml (1 hunks)
- cli/package.json (2 hunks)
- cloud/3rd-party/package.json (1 hunks)
- cloud/backend/base/package.json (2 hunks)
- cloud/backend/cf-d1/package.json (2 hunks)
- cloud/backend/node/package.json (2 hunks)
- cloud/base/package.json (1 hunks)
- cloud/todo-app/package.json (1 hunks)
- core/base/ledger.ts (2 hunks)
- core/base/package.json (2 hunks)
- core/blockstore/package.json (2 hunks)
- core/core/package.json (1 hunks)
- core/device-id/package.json (1 hunks)
- core/gateways/base/package.json (1 hunks)
- core/gateways/cloud/package.json (1 hunks)
- core/gateways/file-deno/package.json (1 hunks)
- core/gateways/file-node/package.json (1 hunks)
- core/gateways/file/package.json (1 hunks)
- core/gateways/indexeddb/package.json (1 hunks)
- core/gateways/memory/package.json (1 hunks)
- core/keybag/package.json (1 hunks)
- core/protocols/cloud/package.json (1 hunks)
- core/protocols/dashboard/package.json (1 hunks)
- core/runtime/package.json (2 hunks)
- core/tests/fireproof/attachable-subscription.test.ts (6 hunks)
- core/tests/package.json (2 hunks)
- core/tests/runtime/meta-key-hack.test.ts (1 hunks)
- core/types/base/package.json (1 hunks)
- core/types/blockstore/package.json (1 hunks)
- core/types/protocols/cloud/package.json (1 hunks)
- core/types/runtime/package.json (1 hunks)
- dashboard/package.json (4 hunks)
- notes/meta-impl.md (1 hunks)
- notes/meta-testing.md (1 hunks)
- package.json (1 hunks)
- use-fireproof/package.json (1 hunks)
- use-fireproof/react/img-file.ts (5 hunks)
- vendor/package.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
core/tests/fireproof/attachable-subscription.test.ts (4)
- core/types/base/types.ts (3): Database (625-667), DocWithId (238-238), PARAM (42-68)
- core/base/ledger.ts (3): name (158-161), fireproof (335-337), sthis (114-116)
- core/base/database.ts (2): name (67-69), allDocs (140-162)
- core/runtime/utils.ts (1): sleep (575-577)
core/tests/runtime/meta-key-hack.test.ts (2)
- core/blockstore/attachable-store.ts (1): ctx (62-64)
- core/types/blockstore/types.ts (1): Loadable (668-700)
🪛 GitHub Actions: @fireproof/core
notes/meta-impl.md
[error] 1-1: Prettier formatting check failed. Run 'prettier --write' to fix code style issues.
notes/meta-testing.md
[error] 1-1: Prettier formatting check failed. Run 'prettier --write' to fix code style issues.
core/tests/fireproof/attachable-subscription.test.ts
[error] 1-1: Prettier formatting check failed. Run 'prettier --write' to fix code style issues.
🔇 Additional comments (39)
use-fireproof/react/img-file.ts (1)
77-86: Good call on namespaced content keys.

Using `cid:` for DocFileMeta and `file:` for File objects prevents collisions and simplifies cleanup logic keyed by stable content identity.

core/gateways/file-node/package.json (1)

39-39: OK; Node-only package. Bump is fine in the Node-targeted gateway.
package.json (1)
64-79: Incorrect — no workspace multiformats mismatch found.

All package.json files list multiformats as ^13.4.0 (including root); `pnpm -w ls` shows no @adviser/cement — the suggested override/bump to ^13.4.1 is unnecessary.

Likely an incorrect or invalid review comment.
core/gateways/cloud/package.json (1)
39-39: Approve bump — single hoisted @adviser/cement@0.4.39 verified.

Scanned package.json files and pnpm-lock.yaml: the lockfile only lists @adviser/cement@0.4.39 (plus a typescript variant), and older ranges in package.json (e.g., ^0.4.21, ^0.4.20) resolve to 0.4.39 — no transitive drift or duplicate-version bundling detected.
cloud/todo-app/package.json (1)
44-44: Bump looks fine but cannot verify — build failed during install (Playwright browser/FFMPEG download, ENOENT uv_cwd).

pnpm dedupe/install ran (deprecated subdeps noted), but `pnpm -w -r build` failed while Playwright attempted browser/FFMPEG downloads; cannot confirm peer warnings.
- Action: Re-run the commands in a networked environment (or run with PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD=1) and provide the output of:
```bash
pnpm -w why @adviser/cement; pnpm -w dedupe; pnpm -w install --lockfile-only; pnpm -w -r build 2>&1 | rg -n "peer.*warning|unmet"
```

cloud/3rd-party/package.json (1)
42-45: No change required — react is satisfied by the workspace.
`pnpm -w -F @fireproof/cloud-3rd-party why react` shows react 19.1.1 is provided (react-dom peer + use-fireproof), so adding react to cloud/3rd-party/package.json is unnecessary.

core/gateways/indexeddb/package.json (1)

39-39: Browser target: build failed — cannot verify @adviser/cement doesn't pull in Node built-ins.

Build aborted with EACCES writing temp files during core-cli tsc, so no dist artifacts were produced and the Node-core scan couldn't run. Re-run the provided check in CI or locally after fixing permissions, or supply the built/dist artifacts for scanning. Location: core/gateways/indexeddb/package.json (dependency "@adviser/cement": "^0.4.39").
core/tests/package.json (1)
43-65: Align test env with Node types bump.

Upgrading @adviser/cement and @types/node to ^24.4.0 is fine; verify Vitest/Playwright types resolve cleanly under TS 5.8.x and Node ≥22 used in CI.
Run:
vendor/package.json (1)
32-32: Consider moving cement to devDependencies if only used at build time.

If vendor doesn't require @adviser/cement at runtime, shift it to devDependencies to trim consumers' dependency trees.
Find imports:
If only build scripts reference it, move it under devDependencies.
core/gateways/file-deno/package.json (2)
39-39: Bump to @adviser/cement ^0.4.39 looks good. Consistent with the repo-wide upgrade. No API risk expected.

42-44: Move type packages to devDependencies (avoid shipping types at runtime).

Both @types/deno and @types/node are compile-time only and should not be installed in production environments.

Apply this diff:

```diff
   "dependencies": {
     "@adviser/cement": "^0.4.39",
     "@fireproof/core-types-base": "workspace:0.0.0",
     "@fireproof/vendor": "workspace:0.0.0",
-    "@types/deno": "^2.3.0",
-    "@types/node": "^24.4.0"
+  },
+  "devDependencies": {
+    "@types/deno": "^2.3.0",
+    "@types/node": "^24.4.0"
   }
```

Run to check for other packages shipping @types/* in dependencies:
core/base/package.json (2)
39-39: Cement bump LGTM.
54-54: @types/node as devDependency is correct. No action needed.
cli/package.json (2)
42-42: Cement bump LGTM.
51-51: multiformats ranges aligned — no 12.x pins found.

All package.json files reference "multiformats": "^13.4.0": package.json, core/types/runtime/package.json, core/types/protocols/cloud/package.json, core/types/blockstore/package.json, core/types/base/package.json, core/runtime/package.json, core/keybag/package.json, core/device-id/package.json, core/blockstore/package.json, cli/package.json, cloud/backend/cf-d1/package.json, dashboard/package.json.
core/types/base/package.json (2)
39-39: Cement bump LGTM.
44-44: multiformats range normalized. Looks fine; matches cli.
core/protocols/cloud/package.json (1)
39-39: Cement bump LGTM.

.github/dependabot.yml (1)

17-17: Incorrect — suggested Dependabot change doesn't apply.

The repo's .github/dependabot.yml uses ignore entries with concrete version constraints (e.g., `versions: [">3.4.17"]`); `versions` is valid for ignore rules. `update-types` is a different field for update filters and does not apply here, so the proposed diff is incorrect and no change is needed.

Likely an incorrect or invalid review comment.
core/gateways/base/package.json (1)
39-39: Hold — inconsistent @adviser/cement versions found; verify before approving.

Most packages use ^0.4.39; exceptions found:
- examples/react-router/package.json — @adviser/cement: ^0.4.21
- use-fireproof/package.json — peerDependencies: @adviser/cement: ^0.4.20 (dependencies: ^0.4.39)
Confirm these older pins are intentional (compatibility/examples) or update them to ^0.4.39.
use-fireproof/package.json (1)
25-42: Resolved — peer dependency for @adviser/cement matches dependency (no action required)
Ran a repository-wide check; no dependency/peerDependency mismatches were found (NO_MISMATCHES_FOUND).

core/core/package.json (1)
42-46: Deps bump looks fine; confirm compatibility surface.
- @adviser/cement: ^0.4.39 — ensure no breaking changes relevant to our usage.
- @types/node: ^24.4.0 — verify it matches our supported Node runtimes and TS lib settings.
core/gateways/file/package.json (1)
41-45: LGTM; align dev/runtime type targets.
- Dev-only @types/node ^24.4.0 is fine; confirm tsconfig/lib and Node engine targets are consistent with other packages.
core/protocols/dashboard/package.json (1)
39-39: Cement bump acknowledged. No concerns here; just ensure monorepo-wide version coherence to avoid multiple copies during hoisting.
cloud/base/package.json (1)
41-41: Cement bump OK. No further action beyond ensuring lockfile regen and CI cache bust.
cloud/backend/base/package.json (1)
39-50: LGTM - Clean dependency updates. All dependency version bumps are consistent with the broader PR pattern and follow semantic versioning appropriately.
dashboard/package.json (1)
28-84: Notable Tailwind CSS downgrade - verify intentionality.

Most dependencies follow expected patch/minor updates, but there's a significant version change for Tailwind CSS from ^4.1.13 to ^3.4.17 (a major version downgrade). This could indicate either a rollback due to compatibility issues or an error in the update process.

Based on the search results, the Tailwind CSS downgrade from v4.1.13 to v3.4.17 is a reasonable rollback decision due to v4's compatibility issues with older browsers and existing tooling, especially with Next.js and Turbopack builds. The official documentation recommends sticking with v3.4 for projects that need to support older browsers.
core/types/runtime/package.json (1)
39-41: LGTM - Standard dependency updates. These dependency version updates align with the broader PR pattern and are appropriate patch/minor updates.
core/keybag/package.json (1)
39-46: LGTM - Consistent dependency updates. The dependency updates follow the same pattern as other packages in this PR and maintain version consistency across the codebase.
core/tests/runtime/meta-key-hack.test.ts (2)
16-16: LGTM - Proper variable scoping fix.

Moving the `ctx` declaration to the outer scope resolves any potential scoping issues and makes the variable accessible throughout the test lifecycle.
51-51: Updated event handler signature aligns with ledger changes.

The callback parameter change from `(key)` to `({ key })` is consistent with the destructuring pattern introduced in core/base/ledger.ts line 51. This maintains API consistency across the codebase.

cloud/backend/cf-d1/package.json (2)
42-61: LGTM - Consistent cloud backend updates. All dependency updates follow the established pattern from other packages in this PR, maintaining version consistency across cloud backend components.
core/base/ledger.ts (2)
73-73: LGTM - Simplified cleanup call.

The removal of the `.toString()` cast is appropriate since the key is already being passed directly from the destructured parameter.
51-51: No breaking-wide change found — only ledger callsite uses destructured { key }.
- Repo scan: 70 `.once(` usages; only one call uses `({ key })` — core/base/ledger.ts:51. Other `.get(...).once()` callsites either pass no params or pass a factory/function reference.
- Action: If the KeyedResolvOnce callback signature changed (from `key` to `{ key }`), this edit is local to core/base/ledger.ts; other callers don't appear to rely on a scalar `key` argument. Verify runtime behavior of any named function references passed directly (e.g., cloud/gateway.ts:648) if they expect a scalar param.

core/types/blockstore/package.json (1)
39-44: LGTM - Standard dependency maintenance. The dependency updates are consistent with the overall PR pattern and represent appropriate version bumps.
notes/meta-impl.md (1)
1-1: Resolved — Prettier fixed the mismatched Markdown fence in notes/meta-impl.md
npx prettier --write notes/meta-impl.md ran and formatted the file; no further action required.

notes/meta-testing.md (1)
1-61: Prettier run — formatting fixed; align docs with code
- notes/meta-testing.md was formatted (npx prettier --write notes/meta-testing.md).
- Ensure doc statements/examples about KeyBag constructor and envelope vs Uint8Array match the actual implementation in core/keybag/key-bag.ts and core/gateways/memory/gateway.ts; update or remove mismatches.
core/device-id/package.json (1)
39-39: Multiformats version appears downgraded — confirm repo-wide consistency
- core/device-id/package.json shows "@adviser/cement": "^0.4.39"; original report indicates "multiformats": "^13.4.0", which may conflict with packages using ^13.4.1+ (risk of hoist drift).
- Verification couldn't be completed: ripgrep returned "No files were searched". From the repo root run:
```bash
rg -n --hidden --no-ignore '"multiformats":' -g '!node_modules' -g '!/dist/' -S
```

cloud/backend/node/package.json (1)
39-39: Deps bump verified; all cloud/backend packages consistently use @adviser/cement ^0.4.39, Hono ^4.9.8 and @types/node ^24.4.0.
| "@types/node": "^24.4.0" | ||
| }, | ||
| "dependencies": { | ||
| "@adviser/cement": "^0.4.35", | ||
| "@adviser/cement": "^0.4.39", | ||
| "@fireproof/core-gateways-base": "workspace:0.0.0", |
🧩 Analysis chain
Keep versions consistent across the workspace; validate no drift.
- @types/node and @adviser/cement bumps look good here; ensure all packages align to avoid duplicate installs/DI issues in tests.
Run this repo-wide check to spot inconsistencies for @adviser/cement, multiformats, and @types/node:
🏁 Script executed:
#!/bin/bash
set -euo pipefail
files=($(fd -H -t f package.json))
check_pkg () {
local pkg="$1"
echo "== $pkg =="
awk 'BEGIN{print "version\tfile"}'
for f in "${files[@]}"; do
v=$(jq -r --arg p "$pkg" '(.dependencies[$p] // .devDependencies[$p] // empty)' "$f")
if [[ -n "${v:-}" ]]; then
printf "%s\t%s\n" "$v" "$f"
fi
done | sort -u
echo "Distinct versions:"
for f in "${files[@]}"; do
jq -r --arg p "$pkg" '(.dependencies[$p] // .devDependencies[$p] // empty)' "$f"
done | sed '/^$/d' | sort -u | nl -ba
echo
}
check_pkg "@adviser/cement"
check_pkg "multiformats"
check_pkg "@types/node"Length of output: 1204
🏁 Script executed:
#!/usr/bin/env bash
set -euo pipefail
pkgs=( "@adviser/cement" "multiformats" "@types/node" )
for pkg in "${pkgs[@]}"; do
echo "== $pkg =="
printf "%s\t%s\n" "version" "file"
rg -n --hidden --no-ignore -g '!**/node_modules/**' --color=never "\"$pkg\"\\s*:\\s*\"[^\"]+\"" || true \
| sed -E "s#([^:]+):[0-9]+:.*\"$pkg\"\\s*:\\s*\"([^\"]+)\".*#\\2\t\\1#" \
| sort -u
echo
done

Length of output: 547
Align @adviser/cement versions across the workspace
- examples/react-router/package.json uses ^0.4.21 (outlier); update it to ^0.4.39 to match the rest of the workspace and avoid hoisting/duplicate-install DI issues.
- multiformats and @types/node checks were inconclusive due to parse/search errors — re-run the repo-wide check for those packages.
🤖 Prompt for AI Agents
In core/gateways/memory/package.json around lines 41 to 45, update the outlier
in examples/react-router/package.json by changing the @adviser/cement dependency
from ^0.4.21 to ^0.4.39 so it matches the rest of the workspace to avoid
hoisting/duplicate-install DI issues, then re-run the repo-wide dependency check
specifically for multiformats and @types/node to confirm there are no other
mismatched versions or parse/search errors and fix any discrepancies found.
```ts
function setupSubscription(db: Database) {
  const docs: DocWithId<string>[] = [];
  return {
    docs,
    unsub: db.subscribe<string>((sdocs) => {
      docs.push(...sdocs);
    }, true),
  };
}
```
Type fix and make subscription awaitable (current code returns non‑Promise).
- DocWithId is invalid; use an object type that matches test docs.
- Returning a plain object then using it in Promise.race never waits.
Apply:
- function setupSubscription(db: Database) {
- const docs: DocWithId<string>[] = [];
- return {
- docs,
- unsub: db.subscribe<string>((sdocs) => {
- docs.push(...sdocs);
- }, true),
- };
- }
+ function setupSubscription(db: Database) {
+ const docs: DocWithId<{ value: string }>[] = [];
+ let resolveFirst: (() => void) | undefined;
+ const first = new Promise<void>((res) => { resolveFirst = res; });
+ const unsub = db.subscribe<{ value: string }>((sdocs) => {
+ docs.push(...sdocs);
+ if (resolveFirst) { resolveFirst(); resolveFirst = undefined; }
+ }, true);
+ return { docs, unsub, first };
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
function setupSubscription(db: Database) {
  const docs: DocWithId<{ value: string }>[] = [];
  let resolveFirst: (() => void) | undefined;
  const first = new Promise<void>((res) => { resolveFirst = res; });
  const unsub = db.subscribe<{ value: string }>((sdocs) => {
    docs.push(...sdocs);
    if (resolveFirst) { resolveFirst(); resolveFirst = undefined; }
  }, true);
  return { docs, unsub, first };
}
```
🤖 Prompt for AI Agents
In core/tests/fireproof/attachable-subscription.test.ts around lines 159 to 167,
the helper uses an incorrect DocWithId<string> type and returns a plain object
whose unsub is not awaitable; change the docs array type to match the actual
test documents (e.g., the exact object shape used in the test such as { id:
string; /* other fields */ } rather than DocWithId<string>) and make unsub an
awaitable Promise (or return both the unsubscribe function and a Promise that
resolves when the subscription is torn down) so callers using Promise.race can
await completion; ensure the subscribe callback still pushes incoming docs into
the typed docs array and resolve the promise when the unsubscribe logic runs.
```ts
let dbContent: DocWithId<string>[];
let joinableDBs: {
  name: string;
  content: DocWithId<string>[];
}[] = [];
```
Fix generic types for collected docs.
- let dbContent: DocWithId<string>[];
+ let dbContent: DocWithId<{ value: string }>[];
- let joinableDBs: {
+ let joinableDBs: {
name: string;
- content: DocWithId<string>[];
+ content: DocWithId<{ value: string }>[];
}[] = [];

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
let dbContent: DocWithId<{ value: string }>[];
let joinableDBs: {
  name: string;
  content: DocWithId<{ value: string }>[];
}[] = [];
```
🤖 Prompt for AI Agents
In core/tests/fireproof/attachable-subscription.test.ts around lines 171–176,
the collected-doc variables use DocWithId<string> which is too narrow; change
the generics to the actual document type used in the test (e.g.,
DocWithId<TestDoc> for dbContent and content: DocWithId<TestDoc>[] inside
joinableDBs) so the types align with the created documents (or, if the test
defines no TestDoc type, use DocWithId<Record<string, unknown>>). Ensure
imports/types are updated to match.
```ts
async function writeRows(db: Database): Promise<DocWithId<string>[]> {
  const ret: DocWithId<string>[] = [];
  for (let j = 0; j < ROWS; j++) {
    await db.put({ _id: `${db.name}-${j}`, value: `${db.name}-${j}` });
    ret.push(await db.get(`${db.name}-${j}`));
  }
  return ret;
}
```
Align writeRows return type with stored doc shape.
- async function writeRows(db: Database): Promise<DocWithId<string>[]> {
- const ret: DocWithId<string>[] = [];
+ async function writeRows(db: Database): Promise<DocWithId<{ value: string }>[]> {
+ const ret: DocWithId<{ value: string }>[] = [];
for (let j = 0; j < ROWS; j++) {
await db.put({ _id: `${db.name}-${j}`, value: `${db.name}-${j}` });
ret.push(await db.get(`${db.name}-${j}`));
}
return ret;
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
async function writeRows(db: Database): Promise<DocWithId<{ value: string }>[]> {
  const ret: DocWithId<{ value: string }>[] = [];
  for (let j = 0; j < ROWS; j++) {
    await db.put({ _id: `${db.name}-${j}`, value: `${db.name}-${j}` });
    ret.push(await db.get(`${db.name}-${j}`));
  }
  return ret;
}
```
🤖 Prompt for AI Agents
In core/tests/fireproof/attachable-subscription.test.ts around lines 177 to 184,
the writeRows function's declared return type DocWithId<string>[] doesn't match
the shape of the stored documents (which have a value field). Update the
function signature to return DocWithId<{ value: string }>[] and ensure any local
variables/assignments respect that shape so the returned items reflect objects
with _id and value properties.
| it("should trigger subscriptions on inbound syncing", { timeout: 30000 }, async () => { | ||
| // Setup subscription on main database before attaching remote databases | ||
| const subscriptionPromise = setupSubscription(db, "main-db"); | ||
| const dbSub = setupSubscription(db); | ||
|
|
Use the awaitable from setupSubscription; current Promise.race resolves immediately.
```diff
- const dbSub = setupSubscription(db);
+ const dbSub = setupSubscription(db);
```

No code here changes, but downstream assertions must use dbSub.first where you intend to await the first event. See the fix below in the "sync" test.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In core/tests/fireproof/attachable-subscription.test.ts around lines 218 to 221,
the test calls setupSubscription(db) but doesn't use the awaitable it returns,
so Promise.race resolves immediately; update the test to use the awaitable
returned by setupSubscription (e.g., use dbSub.first) wherever you intend to
wait for the first subscription event (replace immediate Promise.race usage or
other awaits with awaiting dbSub.first) to ensure downstream assertions run
after the subscription event.
```ts
const objectUrlCache = new LRUMap<string, string>({
  maxEntries: 50, // Limit to 50 cached object URLs to manage memory
});
```
🧩 Analysis chain
Prevent object URL leaks on LRU eviction (missing disposal).
Entries removed by the LRU can leave object URLs unreclaimed since revoke only happens on explicit cleanup paths. Add an eviction/dispose hook to revoke URLs when the cache evicts a key, or swap to an LRU with a disposal callback.
Option A (if LRUMap supports an eviction hook):
-const objectUrlCache = new LRUMap<string, string>({
- maxEntries: 50, // Limit to 50 cached object URLs to manage memory
-});
+const objectUrlCache = new LRUMap<string, string>({
+ maxEntries: 50,
+ // Ensure object URLs are revoked when an entry is evicted
+ // NOTE: verify the exact option name/signature for this LRUMap.
+ onEvict: (_key: string, url: string) => {
+ // eslint-disable-next-line no-restricted-globals
+ URL.revokeObjectURL(url);
+ },
+});

If the library lacks onEvict/dispose, consider wrapping it or replacing with an LRU that supports disposal (e.g., lru-cache) so evicted URLs are revoked automatically.
To confirm availability of an eviction callback:
🌐 Web query:
Does the @adviser/cement LRUMap support an eviction/dispose callback (e.g., onEvict or dispose) to clean up values when entries are evicted?
💡 Result:
Short answer: Yes — the LRUMap implementation used in the ecosystem supports eviction callbacks (onEvict / onStale / onRemove). You can register an onEvict (or onRemove) handler to clean up values when entries are evicted. [1][2]
Sources:
- lru-map README (describes new LRUMap options including onEvict, onStale and onRemove). [1]
- @adviser/cement package listing (package in the same ecosystem; see package info). [2]
If you want, tell me the exact version of @adviser/cement you’re using and I’ll fetch its bundle/source to confirm how it exposes LRUMap and show an example.
Revoke object URLs on LRU eviction
Add an eviction callback to revoke object URLs when entries are evicted to prevent memory leaks.
File: use-fireproof/react/img-file.ts (lines 7–9)
-const objectUrlCache = new LRUMap<string, string>({
- maxEntries: 50, // Limit to 50 cached object URLs to manage memory
-});
+const objectUrlCache = new LRUMap<string, string>({
+ maxEntries: 50,
+ // Revoke object URLs when entries are evicted/removed
+ onEvict: (_key: string, url: string) => {
+ // eslint-disable-next-line no-restricted-globals
+ URL.revokeObjectURL(url);
+ },
+});

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const objectUrlCache = new LRUMap<string, string>({
  maxEntries: 50,
  // Revoke object URLs when entries are evicted/removed
  onEvict: (_key: string, url: string) => {
    // eslint-disable-next-line no-restricted-globals
    URL.revokeObjectURL(url);
  },
});
```
🤖 Prompt for AI Agents
In use-fireproof/react/img-file.ts around lines 7 to 9, the LRUMap currently
caches object URLs but never revokes them, risking memory leaks; add an eviction
callback (e.g., via the LRUMap constructor option like onEvict/onEviction) that
calls URL.revokeObjectURL(evictedValue) for each evicted entry, and also ensure
any code path that overwrites or deletes entries revokes the previous object URL
(and revoke all on clear/unmount) so URLs are properly released when removed
from the cache.
```diff
   // Handle same content key - reuse existing cached URL if available
   if (canReuseCache && newKey) {
     const src = objectUrlCache.get(newKey) as string;
     setImgDataUrl(src);
     fileObjRef.current = fileObj;
     keyRef.current = newKey;
     // Keep existing cleanup function - don't create a new one or call prevCleanup
     return cleanupRef.current;
   }

-  // Return existing cleanup if same file
+  // Return existing cleanup if same file and no cached URL
   return cleanupRef.current;
 }
```
Bug: same content key but missing cache returns stale cleanup and never re-creates URL.
If the key matches but the cache entry was evicted, we should recreate the object URL instead of returning the old cleanup. Otherwise the src won’t get updated.
Apply:
- // Return existing cleanup if same file and no cached URL
- return cleanupRef.current;
+ // Same content key but cache entry missing -> recreate URL and cleanup
+ if (newKey) {
+ const src = getObjectUrlByKey(newKey, fileObj);
+ setImgDataUrl(src);
+ fileObjRef.current = fileObj;
+ keyRef.current = newKey;
+ const newCleanupObj = {
+ contentKey: newKey,
+ revoke: () => {
+ if (objectUrlCache.has(newKey)) {
+ // eslint-disable-next-line no-restricted-globals
+ URL.revokeObjectURL(objectUrlCache.get(newKey) as string);
+ objectUrlCache.delete(newKey);
+ }
+ },
+ };
+ const prevCleanup = cleanupRef.current;
+ if (prevCleanup?.revoke) prevCleanup.revoke();
+ return newCleanupObj;
+ }
+ return cleanupRef.current;

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// Handle same content key - reuse existing cached URL if available
if (canReuseCache && newKey) {
  const src = objectUrlCache.get(newKey) as string;
  setImgDataUrl(src);
  fileObjRef.current = fileObj;
  keyRef.current = newKey;
  // Keep existing cleanup function - don't create a new one or call prevCleanup
  return cleanupRef.current;
}
// Same content key but cache entry missing -> recreate URL and cleanup
if (newKey) {
  const src = getObjectUrlByKey(newKey, fileObj);
  setImgDataUrl(src);
  fileObjRef.current = fileObj;
  keyRef.current = newKey;
  const newCleanupObj = {
    contentKey: newKey,
    revoke: () => {
      if (objectUrlCache.has(newKey)) {
        // eslint-disable-next-line no-restricted-globals
        URL.revokeObjectURL(objectUrlCache.get(newKey) as string);
        objectUrlCache.delete(newKey);
      }
    },
  };
  const prevCleanup = cleanupRef.current;
  if (prevCleanup?.revoke) prevCleanup.revoke();
  return newCleanupObj;
}
return cleanupRef.current;
```
…tion

- Fixed undefined subscriptionCounts and receivedDocs variables in the offline sync test
- Added proper subscription tracking infrastructure for sync tests
- Cleaned up ESLint issues and type errors
- Offline sync test now passes, allowing focus on the main corruption issue

The main "should trigger subscriptions on inbound syncing" test continues to fail with the actual meta corruption problem:

- Stale CAR files (missing car file errors)
- Missing blocks when the CRDT clock references unavailable storage
- Race conditions in the attachment process where carLog shows 0 length after attach

This provides a stable baseline for investigating the root cause of meta synchronization issues in distributed database scenarios.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
## Summary

### Problem Investigation

This PR adds test infrastructure to investigate the meta corruption issue where CRDT clocks reference blocks that aren't available in storage. The enhanced test currently fails with missing block errors, revealing the actual corruption problem rather than API compatibility issues.

### Key Changes

### Test Results

The test now shows the real corruption issue:

- `allDocs()` operations failing because blocks referenced in the CRDT clock are missing

This provides a stable baseline for investigating the root cause of meta synchronization race conditions.
🤖 Generated with Claude Code