Conversation

Member

@tangledbytes tangledbytes commented Nov 12, 2025

Describe the Problem

This PR replaces GLACIER_DA with DEEP_ARCHIVE and changes how we override the storage class so that the storage-class HTTP header is no longer overwritten.

Issues: Fixed #xxx / Gap #xxx

Testing Instructions:

  • Doc added/updated
  • Tests added

Summary by CodeRabbit

  • Bug Fixes

    • Broadened Glacier handling so restore/read flows consider multiple Glacier-related storage classes.
    • Removed the pre-redirect automatic storage-class override step.
  • Refactor

    • Renamed storage class option from GLACIER_DA to DEEP_ARCHIVE and exposed a grouped Glacier storage-classes list.
  • SDK

    • Public type updated: DEEP_ARCHIVE replaces GLACIER_DA as a valid storage-class option.
  • Chores

    • Config entry for NSFS_GLACIER_FORCE_STORAGE_CLASS changed from boolean to optional storage-class selector.



coderabbitai bot commented Nov 12, 2025

Walkthrough

Removed pre-redirect storage-class override call and its implementation; renamed STORAGE_CLASS_GLACIER_DA → STORAGE_CLASS_DEEP_ARCHIVE; introduced GLACIER_STORAGE_CLASSES; updated parsing, runtime checks, and SDK type to include DEEP_ARCHIVE and treat multiple glacier classes as glacier.

Changes

  • S3 request flow (src/endpoint/s3/s3_rest.js): Removed the call to s3_utils.override_storage_class(req) from handle_request; the request flow now proceeds directly to the populate/redirect logic.
  • S3 utils: rename & API surface (src/endpoint/s3/s3_utils.js): Renamed STORAGE_CLASS_GLACIER_DA → STORAGE_CLASS_DEEP_ARCHIVE; removed the override_storage_class function and its export; added GLACIER_STORAGE_CLASSES and STORAGE_CLASS_DEEP_ARCHIVE exports; adjusted parsing and mapping logic.
  • SDK types (src/sdk/nb.d.ts): Replaced GLACIER_DA with DEEP_ARCHIVE in the StorageClass type definition.
  • Glacier/restore logic (src/sdk/glacier.js, src/endpoint/s3/ops/s3_get_object.js, src/sdk/namespace_fs.js): Replaced equality checks against a single glacier constant with membership checks using GLACIER_STORAGE_CLASSES.includes(...); adjusted restore/expiry/support logic and storage-class derivation accordingly.
  • Config (config.js): Changed the NSFS_GLACIER_FORCE_STORAGE_CLASS JSDoc type from boolean to nb.StorageClass and its default from false to undefined.
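The membership-based check described above can be sketched as follows. This is a minimal illustration, not the PR's exact code: the PR inlines the `.includes(...)` check at each call site, and `is_glacier` is a hypothetical helper name used here only for clarity.

```javascript
const STORAGE_CLASS_GLACIER = 'GLACIER';
const STORAGE_CLASS_DEEP_ARCHIVE = 'DEEP_ARCHIVE';

// The classes that require restore-object before reads; GLACIER_IR is
// excluded because it supports instant retrieval.
const GLACIER_STORAGE_CLASSES = Object.freeze([
    STORAGE_CLASS_GLACIER,
    STORAGE_CLASS_DEEP_ARCHIVE,
]);

// Hypothetical helper: the PR itself writes the .includes(...) check inline.
function is_glacier(storage_class) {
    return GLACIER_STORAGE_CLASSES.includes(storage_class);
}
```

This replaces the old single-constant equality check, so DEEP_ARCHIVE objects get the same restore/expiry gating as GLACIER objects.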

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Client
  participant S3_REST as s3_rest.handle_request
  participant Pop as populate_request_additional_info_or_redirect

  rect rgb(232, 243, 255)
    Client->>S3_REST: HTTP request
    note right of S3_REST: Old flow called s3_utils.override_storage_class(req) — removed
  end

  rect rgb(232, 255, 232)
    S3_REST->>Pop: populate_request_additional_info_or_redirect(req)
    Pop-->>S3_REST: proceed / redirect
    S3_REST->>Client: response
  end
sequenceDiagram
  autonumber
  participant Caller
  participant GlacierLogic as Glacier-related logic

  note over GlacierLogic: Old: storage_class === STORAGE_CLASS_GLACIER<br/>New: GLACIER_STORAGE_CLASSES.includes(storage_class)
  Caller->>GlacierLogic: check storage_class
  GlacierLogic-->>Caller: treat as glacier if included

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Areas to focus on:
    • src/endpoint/s3/s3_utils.js exports and any callers relying on removed override_storage_class.
    • src/sdk/nb.d.ts — ensure consumers accept DEEP_ARCHIVE.
    • GLACIER_STORAGE_CLASSES usage sites (src/sdk/glacier.js, src/endpoint/s3/ops/s3_get_object.js, src/sdk/namespace_fs.js) for consistent restore/expiry behavior.


Suggested reviewers

  • guymguym
  • dannyzaken
  • jackyalbo
  • aayushchouhan09

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: coverage is 66.67%, below the required 80.00% threshold. Run @coderabbitai generate docstrings to improve it.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the title directly and accurately describes the main change, replacing GLACIER_DA with DEEP_ARCHIVE.





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
src/endpoint/s3/s3_utils.js (1)

24-24: Consider renaming the constant to match its new value.

The constant STORAGE_CLASS_GLACIER_DA now holds the value 'DEEP_ARCHIVE', creating a semantic mismatch. While this maintains export compatibility, it may confuse developers.

Consider this refactor to improve clarity:

-const STORAGE_CLASS_GLACIER_DA = 'DEEP_ARCHIVE'; // "S3 Deep Archive Storage Class"
+const STORAGE_CLASS_DEEP_ARCHIVE = 'DEEP_ARCHIVE'; // "S3 Deep Archive Storage Class"

And update the export on line 832:

-exports.STORAGE_CLASS_GLACIER_DA = STORAGE_CLASS_GLACIER_DA;
+exports.STORAGE_CLASS_DEEP_ARCHIVE = STORAGE_CLASS_DEEP_ARCHIVE;

Alternatively, if maintaining the old export name is critical for API compatibility, you could keep both:

+const STORAGE_CLASS_DEEP_ARCHIVE = 'DEEP_ARCHIVE'; // "S3 Deep Archive Storage Class"
-const STORAGE_CLASS_GLACIER_DA = 'DEEP_ARCHIVE'; // "S3 Deep Archive Storage Class"
+const STORAGE_CLASS_GLACIER_DA = STORAGE_CLASS_DEEP_ARCHIVE; // Deprecated, use STORAGE_CLASS_DEEP_ARCHIVE
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 36c21ce and d58d998.

📒 Files selected for processing (3)
  • src/endpoint/s3/s3_rest.js (0 hunks)
  • src/endpoint/s3/s3_utils.js (2 hunks)
  • src/sdk/nb.d.ts (1 hunks)
💤 Files with no reviewable changes (1)
  • src/endpoint/s3/s3_rest.js
🧰 Additional context used
🧬 Code graph analysis (1)
src/endpoint/s3/s3_utils.js (3)
src/endpoint/s3/ops/s3_put_object.js (1)
  • storage_class (23-23)
src/endpoint/s3/ops/s3_post_object_uploads.js (1)
  • storage_class (16-16)
config.js (1)
  • config (7-7)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build Noobaa Image
  • GitHub Check: run-package-lock-validation
  • GitHub Check: run-jest-unit-tests
🔇 Additional comments (3)
src/sdk/nb.d.ts (1)

20-20: Type definition updated to reflect DEEP_ARCHIVE storage class.

The change aligns with the runtime constant value update in s3_utils.js (line 24). TypeScript will now correctly validate 'DEEP_ARCHIVE' as a valid StorageClass value.

src/endpoint/s3/s3_utils.js (2)

384-387: Inline override logic correctly implements the new mechanism.

The override logic has been successfully moved inline from the removed override_storage_class function. When NSFS_GLACIER_FORCE_STORAGE_CLASS is enabled and storage class is missing or STANDARD, the function now returns GLACIER, achieving the PR objective of not overwriting the storage class HTTP header (by handling it during parsing instead).


383-393:

The review comment incorrectly assumes a breaking change that does not exist. The storage_class_enum schema only includes ['STANDARD', 'GLACIER', 'GLACIER_IR', 'Glacier', 'DEEP_ARCHIVE', 'INTELLIGENT_TIERING', 'ONEZONE_IA', 'STANDARD_IA']—'GLACIER_DA' was never a valid stored value. STORAGE_CLASS_GLACIER_DA has always been defined as 'DEEP_ARCHIVE', and no 'GLACIER_DA' string literals exist in the codebase. The parse_storage_class function correctly validates inputs against the defined constants and rejects unknown values. No data migration or backward compatibility handling is required.

Likely an incorrect or invalid review comment.

@tangledbytes tangledbytes force-pushed the utkarsh/fix/glacier_da-storage-class branch from d58d998 to 65c6c42 on November 13, 2025 07:36
@pull-request-size pull-request-size bot added size/M and removed size/S labels Nov 13, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
src/endpoint/s3/ops/s3_get_object.js (1)

44-50: Broadened Glacier gate via GLACIER_STORAGE_CLASSES.

Good change. Minor hardening: normalize via parse_storage_class to respect force flags uniformly.

-    if (s3_utils.GLACIER_STORAGE_CLASSES.includes(object_md.storage_class)) {
+    if (s3_utils.GLACIER_STORAGE_CLASSES.includes(
+        s3_utils.parse_storage_class(object_md.storage_class)
+    )) {
src/sdk/glacier.js (1)

333-335: Gate now covers all Glacier-like classes; fix stale comment.

Logic LGTM. Please update the note to “Returns undefined if storage_class is not in GLACIER_STORAGE_CLASSES.”

- * NOTE: Returns undefined if `user.storage_class` attribute is not
- * `GLACIER`
+ * NOTE: Returns undefined if storage_class is not one of GLACIER_STORAGE_CLASSES

Also applies to: 342-345

src/sdk/namespace_fs.js (1)

1366-1369: Auto-enqueue migrate WAL when Glacier-like class is uploaded.

Looks right. Consider guarding with NSFS_GLACIER_LOGS_ENABLED check here (optional; currently handled inside append_to_migrate_wal).

src/endpoint/s3/s3_utils.js (2)

24-25: Introduce DEEP_ARCHIVE and GLACIER_STORAGE_CLASSES.

Good centralization. Consider Object.freeze(GLACIER_STORAGE_CLASSES) to prevent accidental mutation.

-const GLACIER_STORAGE_CLASSES = [
+const GLACIER_STORAGE_CLASSES = Object.freeze([
     STORAGE_CLASS_GLACIER,
     STORAGE_CLASS_DEEP_ARCHIVE,
-];
+]);

Also applies to: 47-52


389-397: parse_storage_class: normalize input for robustness.

Optional: trim/uppercase header before comparisons to avoid case/space pitfalls.

-function parse_storage_class(storage_class) {
+function parse_storage_class(storage_class) {
+    if (typeof storage_class === 'string') storage_class = storage_class.trim();
+    // AWS sends uppercase; normalizing defensively
+    if (typeof storage_class === 'string') storage_class = storage_class.toUpperCase();
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d58d998 and 65c6c42.

📒 Files selected for processing (6)
  • src/endpoint/s3/ops/s3_get_object.js (1 hunks)
  • src/endpoint/s3/s3_rest.js (0 hunks)
  • src/endpoint/s3/s3_utils.js (5 hunks)
  • src/sdk/glacier.js (1 hunks)
  • src/sdk/namespace_fs.js (7 hunks)
  • src/sdk/nb.d.ts (1 hunks)
💤 Files with no reviewable changes (1)
  • src/endpoint/s3/s3_rest.js
🧰 Additional context used
🧬 Code graph analysis (2)
src/endpoint/s3/s3_utils.js (2)
src/endpoint/s3/ops/s3_put_object.js (1)
  • storage_class (23-23)
src/endpoint/s3/ops/s3_post_object_uploads.js (1)
  • storage_class (16-16)
src/sdk/namespace_fs.js (2)
src/test/unit_tests/nsfs/test_nsfs_glacier_backend.js (4)
  • s3_utils (13-13)
  • params (223-230)
  • params (539-546)
  • params (570-578)
src/endpoint/s3/s3_utils.js (1)
  • storage_class (322-322)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build Noobaa Image
  • GitHub Check: run-jest-unit-tests
  • GitHub Check: run-package-lock-validation
🔇 Additional comments (7)
src/sdk/namespace_fs.js (5)

1080-1086: Read gate aligned with GLACIER_STORAGE_CLASSES.

Correct and consistent with restore semantics.


1310-1321: Copy-from Glacier objects requires restore and forces fallback.

Good parity with S3 behavior. No issues.


2300-2301: Restore reply now echoes actual storage_class from xattr.

Good improvement for accuracy.


3616-3617: Force-expire-on-get covers Glacier-like classes.

Matches new model. LGTM.


3544-3547: GLACIER_IR exclusion from NSFS is intentional by design.

The GLACIER_STORAGE_CLASSES constant is specifically defined for NSFS glacier support and contains only STORAGE_CLASS_GLACIER and STORAGE_CLASS_DEEP_ARCHIVE; STORAGE_CLASS_GLACIER_IR is intentionally excluded. This is consistent across the codebase: in bucketspace_fs.js, only STORAGE_CLASS_GLACIER (not GLACIER_IR or DEEP_ARCHIVE) is advertised as a supported storage class when NSFS_GLACIER_ENABLED is configured. The exclusion is not an oversight but a deliberate design choice limiting NSFS glacier support to specific storage classes.

src/endpoint/s3/s3_utils.js (1)

838-838: Verify documentation updates for new public API exports.

The exports of STORAGE_CLASS_DEEP_ARCHIVE and GLACIER_STORAGE_CLASSES at lines 838 and 882 expose new public API surface. These constants are actively used internally by SDK modules (glacier.js, namespace_fs.js, s3_get_object.js). Please confirm that:

  1. Any relevant developer or SDK documentation has been updated to document these exports
  2. If external SDK consumers depend on these constants, they have been notified
src/sdk/nb.d.ts (1)

20-20: StorageClass refactoring verified — no issues found.

The type definition and s3_utils constants are correctly aligned:

  • StorageClass union includes all four classes: STANDARD, GLACIER, GLACIER_IR, DEEP_ARCHIVE
  • s3_utils exports STORAGE_CLASS_DEEP_ARCHIVE with proper JSDoc type annotation
  • GLACIER_STORAGE_CLASSES array correctly includes both GLACIER and DEEP_ARCHIVE
  • Zero GLACIER_DA references remain in the codebase
  • Type consistency maintained throughout

Member

@guymguym guymguym left a comment


minor comments. LGTM!

const OBJECT_ATTRIBUTES_UNSUPPORTED = Object.freeze(['Checksum', 'ObjectParts']);

/** @type {nb.StorageClass[]} */
const GLACIER_STORAGE_CLASSES = [
Member


Just a small comment worth adding here, I think: this list is the classes that require restore-object, which is why GLACIER_IR is not included.

-    if (!storage_class) return STORAGE_CLASS_STANDARD;
-    if (storage_class === STORAGE_CLASS_STANDARD) return STORAGE_CLASS_STANDARD;
+    if (!storage_class || storage_class === STORAGE_CLASS_STANDARD) {
+        if (config.NSFS_GLACIER_FORCE_STORAGE_CLASS) return STORAGE_CLASS_GLACIER;
Member


I think we can get more flexibility if the config specifies the forced storage class name (string), and then this function can be written to validate it too

function parse_storage_class(storage_class) {
    if (config.NSFS_GLACIER_FORCE_STORAGE_CLASS) {
        storage_class = config.NSFS_GLACIER_FORCE_STORAGE_CLASS;
    }
    if (!storage_class) return STORAGE_CLASS_STANDARD;
    if (storage_class === STORAGE_CLASS_STANDARD) return STORAGE_CLASS_STANDARD;
    if (storage_class === STORAGE_CLASS_GLACIER) return STORAGE_CLASS_GLACIER;
    if (storage_class === STORAGE_CLASS_DEEP_ARCHIVE) return STORAGE_CLASS_DEEP_ARCHIVE;
    if (storage_class === STORAGE_CLASS_GLACIER_IR) return STORAGE_CLASS_GLACIER_IR;
    throw new Error(`No such s3 storage class ${storage_class}`);
}
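As a self-contained illustration of the suggestion above, a sketch with `config` stubbed locally (the real module reads it from config.js):

```javascript
const STORAGE_CLASS_STANDARD = 'STANDARD';
const STORAGE_CLASS_GLACIER = 'GLACIER';
const STORAGE_CLASS_GLACIER_IR = 'GLACIER_IR';
const STORAGE_CLASS_DEEP_ARCHIVE = 'DEEP_ARCHIVE';

// Stubbed for illustration; in noobaa-core this comes from config.js.
const config = { NSFS_GLACIER_FORCE_STORAGE_CLASS: undefined };

function parse_storage_class(storage_class) {
    // The forced class, when configured, replaces whatever the client sent
    // and is then validated by the same checks below.
    if (config.NSFS_GLACIER_FORCE_STORAGE_CLASS) {
        storage_class = config.NSFS_GLACIER_FORCE_STORAGE_CLASS;
    }
    if (!storage_class) return STORAGE_CLASS_STANDARD;
    if (storage_class === STORAGE_CLASS_STANDARD) return STORAGE_CLASS_STANDARD;
    if (storage_class === STORAGE_CLASS_GLACIER) return STORAGE_CLASS_GLACIER;
    if (storage_class === STORAGE_CLASS_DEEP_ARCHIVE) return STORAGE_CLASS_DEEP_ARCHIVE;
    if (storage_class === STORAGE_CLASS_GLACIER_IR) return STORAGE_CLASS_GLACIER_IR;
    throw new Error(`No such s3 storage class ${storage_class}`);
}
```

Note that in this shape the forced class overrides every input, including explicit non-STANDARD requests; the later review discussion weighs that against a narrower "only when unset or STANDARD" override.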

Signed-off-by: Utkarsh Srivastava <[email protected]>

drop support for GLACIER_IR and fix migration and restore path

Signed-off-by: Utkarsh Srivastava <[email protected]>
@tangledbytes tangledbytes force-pushed the utkarsh/fix/glacier_da-storage-class branch from 65c6c42 to ae51010 on November 24, 2025 06:47

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
src/endpoint/s3/s3_utils.js (1)

24-56: DEEP_ARCHIVE constant and GLACIER_STORAGE_CLASSES definition look solid

Introducing STORAGE_CLASS_DEEP_ARCHIVE and GLACIER_STORAGE_CLASSES (excluding GLACIER_IR, which doesn’t require restore) matches AWS semantics and the intended “restore-required” set. This also addresses the earlier review note about documenting why GLACIER_IR is omitted.

Optionally, you can freeze the array to avoid accidental mutation:

-const GLACIER_STORAGE_CLASSES = [
-    STORAGE_CLASS_GLACIER,
-    STORAGE_CLASS_DEEP_ARCHIVE,
-];
+const GLACIER_STORAGE_CLASSES = Object.freeze([
+    STORAGE_CLASS_GLACIER,
+    STORAGE_CLASS_DEEP_ARCHIVE,
+]);
🧹 Nitpick comments (4)
config.js (1)

199-205: Clarify/align NSFS_GLACIER_FORCE_STORAGE_CLASS semantics with parser behavior

The JSDoc says this config is used “if no storage class or STANDARD storage class is provided”, but parse_storage_class() currently overrides any incoming value (including GLACIER_IR and values read from xattrs) whenever this config is set.

That broader override is fine if intentional, but then the comment is misleading. Otherwise, narrow the override to the “unset or STANDARD” case. For example in s3_utils.parse_storage_class:

-function parse_storage_class(storage_class) {
-    if (config.NSFS_GLACIER_FORCE_STORAGE_CLASS) {
-        storage_class = config.NSFS_GLACIER_FORCE_STORAGE_CLASS;
-    }
-    if (!storage_class || storage_class === STORAGE_CLASS_STANDARD) return STORAGE_CLASS_STANDARD;
+function parse_storage_class(storage_class) {
+    const forced = config.NSFS_GLACIER_FORCE_STORAGE_CLASS;
+    if (!storage_class || storage_class === STORAGE_CLASS_STANDARD) {
+        storage_class = forced || STORAGE_CLASS_STANDARD;
+    }
+    if (storage_class === STORAGE_CLASS_STANDARD) return STORAGE_CLASS_STANDARD;
src/sdk/namespace_fs.js (2)

1303-1321: Copy behavior for Glacier classes looks good; fix log prefix for clarity

The new GLACIER_STORAGE_CLASSES checks for both source and destination storage classes correctly force a data copy when Glacier-like classes are involved and reject non-restored sources.

Minor nit: the warning still says _validate_upload, which can mislead when debugging:

-                    dbg.warn('_validate_upload: object is not restored yet', src_restore_status);
+                    dbg.warn('_check_copy_storage_class: object is not restored yet', src_restore_status);

3615-3625: Consider bypassing global override when deciding Glacier force-expiry

_glacier_force_expire_on_get() now uses parse_storage_class() before checking GLACIER_STORAGE_CLASSES, so NSFS_GLACIER_FORCE_STORAGE_CLASS can cause even STANDARD objects (no xattr) to be treated as Glacier-like here.

If the intent is to base expiry strictly on the stored storage-class xattr, independent of the global override, you might prefer:

-        const storage_class = s3_utils.parse_storage_class(stat.xattr[Glacier.STORAGE_CLASS_XATTR]);
-        if (!s3_utils.GLACIER_STORAGE_CLASSES.includes(storage_class)) return;
+        const storage_class = stat.xattr[Glacier.STORAGE_CLASS_XATTR];
+        if (!s3_utils.GLACIER_STORAGE_CLASSES.includes(storage_class)) return;

Otherwise, it would be good to document that NSFS_GLACIER_FORCE_STORAGE_CLASS also affects force-expire semantics.

src/endpoint/s3/s3_utils.js (1)

390-403: Scope of parse_storage_class override may be broader than intended

parse_storage_class() now always replaces its argument with config.NSFS_GLACIER_FORCE_STORAGE_CLASS when that config is set, before any checks:

if (config.NSFS_GLACIER_FORCE_STORAGE_CLASS) {
    storage_class = config.NSFS_GLACIER_FORCE_STORAGE_CLASS;
}

Effects:

  • Explicit non-STANDARD requests (e.g., GLACIER_IR) are silently mapped to the forced class.
  • Callers that parse stored xattrs (e.g., Glacier helpers) see the forced class instead of the raw on-disk value.

Given the config JSDoc in config.js talks specifically about the “no storage class or STANDARD” case, this might be more aggressive than desired.

If you only want to override when the client sends nothing or STANDARD, consider narrowing as:

-function parse_storage_class(storage_class) {
-    if (config.NSFS_GLACIER_FORCE_STORAGE_CLASS) {
-        storage_class = config.NSFS_GLACIER_FORCE_STORAGE_CLASS;
-    }
-    if (!storage_class || storage_class === STORAGE_CLASS_STANDARD) return STORAGE_CLASS_STANDARD;
+function parse_storage_class(storage_class) {
+    const forced = config.NSFS_GLACIER_FORCE_STORAGE_CLASS;
+    if (!storage_class || storage_class === STORAGE_CLASS_STANDARD) {
+        storage_class = forced || STORAGE_CLASS_STANDARD;
+    }
+    if (storage_class === STORAGE_CLASS_STANDARD) return STORAGE_CLASS_STANDARD;

Otherwise, updating the config comment to explicitly describe this global override behavior would avoid confusion.
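Under the narrowed semantics proposed in the diff above, a stand-alone sketch (again with `config` stubbed, and a hypothetical `_narrowed` suffix to distinguish it from the merged version) would behave like this:

```javascript
const STORAGE_CLASS_STANDARD = 'STANDARD';
const STORAGE_CLASS_GLACIER = 'GLACIER';
const STORAGE_CLASS_GLACIER_IR = 'GLACIER_IR';
const STORAGE_CLASS_DEEP_ARCHIVE = 'DEEP_ARCHIVE';

// Stubbed for illustration; in noobaa-core this comes from config.js.
const config = { NSFS_GLACIER_FORCE_STORAGE_CLASS: undefined };

function parse_storage_class_narrowed(storage_class) {
    // Narrowed override: the forced class applies only when the client sent
    // nothing or STANDARD; explicit non-STANDARD requests pass through.
    const forced = config.NSFS_GLACIER_FORCE_STORAGE_CLASS;
    if (!storage_class || storage_class === STORAGE_CLASS_STANDARD) {
        storage_class = forced || STORAGE_CLASS_STANDARD;
    }
    if (storage_class === STORAGE_CLASS_STANDARD) return STORAGE_CLASS_STANDARD;
    if (storage_class === STORAGE_CLASS_GLACIER) return STORAGE_CLASS_GLACIER;
    if (storage_class === STORAGE_CLASS_DEEP_ARCHIVE) return STORAGE_CLASS_DEEP_ARCHIVE;
    if (storage_class === STORAGE_CLASS_GLACIER_IR) return STORAGE_CLASS_GLACIER_IR;
    throw new Error(`No such s3 storage class ${storage_class}`);
}
```

The behavioral difference from the global override is visible with an explicit GLACIER_IR request: the global version maps it to the forced class, while this variant returns GLACIER_IR unchanged.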

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 65c6c42 and ae51010.

📒 Files selected for processing (7)
  • config.js (1 hunks)
  • src/endpoint/s3/ops/s3_get_object.js (1 hunks)
  • src/endpoint/s3/s3_rest.js (0 hunks)
  • src/endpoint/s3/s3_utils.js (5 hunks)
  • src/sdk/glacier.js (1 hunks)
  • src/sdk/namespace_fs.js (7 hunks)
  • src/sdk/nb.d.ts (1 hunks)
💤 Files with no reviewable changes (1)
  • src/endpoint/s3/s3_rest.js
🚧 Files skipped from review as they are similar to previous changes (3)
  • src/sdk/glacier.js
  • src/endpoint/s3/ops/s3_get_object.js
  • src/sdk/nb.d.ts
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-08T13:10:36.141Z
Learnt from: naveenpaul1
Repo: noobaa/noobaa-core PR: 9182
File: src/server/system_services/pool_server.js:1314-1317
Timestamp: 2025-08-08T13:10:36.141Z
Learning: In src/server/system_services/pool_server.js (and config usage), the constant config.INTERNAL_STORAGE_POOL_NAME has been removed from the system. Future logic should not depend on this constant and should instead use config.DEFAULT_POOL_NAME or structural markers (e.g., pool.resource_type === 'INTERNAL' or pool.mongo_info) to identify internal/mongo pools.

Applied to files:

  • config.js
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: run-package-lock-validation
  • GitHub Check: Build Noobaa Image
  • GitHub Check: run-jest-unit-tests
🔇 Additional comments (5)
src/sdk/namespace_fs.js (4)

1078-1085: Glacier read gating now correctly handles all Glacier-like classes

Using s3_utils.GLACIER_STORAGE_CLASSES.includes(obj_storage_class) to block reads until restore covers both GLACIER and DEEP_ARCHIVE and looks logically correct; no issues found.


1359-1368: Triggering migrate WAL for Glacier storage classes is consistent

Appending to the migrate WAL only when params.storage_class is in GLACIER_STORAGE_CLASSES ensures both GLACIER and DEEP_ARCHIVE uploads are tracked for migration, without affecting non-Glacier objects. This aligns with the new storage-class model.


2290-2301: Expose actual storage_class in restore_object response

Returning storage_class: Glacier.storage_class_from_xattr(stat.xattr) instead of a hard-coded GLACIER constant correctly reports DEEP_ARCHIVE vs GLACIER to clients. This is a precise and compatible change.


3541-3548: Confirm that GLACIER_IR is intentionally unsupported in NSFS

_is_storage_class_supported() now treats only storage classes in GLACIER_STORAGE_CLASSES (GLACIER, DEEP_ARCHIVE) as Glacier-like, while STORAGE_CLASS_GLACIER_IR still falls through to return false.

If NSFS buckets are not meant to accept GLACIER_IR, this is fine; otherwise you probably want to treat GLACIER_IR as supported without requiring NSFS_GLACIER_ENABLED, e.g.:

-        if (s3_utils.GLACIER_STORAGE_CLASSES.includes(storage_class)) {
-            return config.NSFS_GLACIER_ENABLED || false;
-        }
+        if (s3_utils.GLACIER_STORAGE_CLASSES.includes(storage_class)) {
+            return config.NSFS_GLACIER_ENABLED || false;
+        }
+        if (storage_class === s3_utils.STORAGE_CLASS_GLACIER_IR) {
+            return true;
+        }
src/endpoint/s3/s3_utils.js (1)

839-886: Exporting DEEP_ARCHIVE and GLACIER_STORAGE_CLASSES

Adding exports for STORAGE_CLASS_DEEP_ARCHIVE and GLACIER_STORAGE_CLASSES is consistent with their use in namespace_fs and Glacier helpers, and centralizes the storage-class definitions in a single module. No issues spotted.

@tangledbytes tangledbytes merged commit c22df71 into noobaa:master Nov 24, 2025
18 of 19 checks passed
