LangSplatv2: Implement mask + CLIP feature transforms #42
Merged
swahtz merged 7 commits into openvdb:main on Feb 9, 2026
Conversation
Implements SAM2 and CLIP data transforms that work together in a pipeline to produce the features needed for LangSplatV2. Closes openvdb#31.

Signed-off-by: Jonathan Swartz <jonathan@jswartz.info>
Contributor
Pull request overview
Adds a LangSplatV2-style preprocessing pipeline by introducing scene transforms to generate multi-scale SAM2 masks and compute CLIP features for masked regions, plus configuration and packaging scaffolding.
Changes:

- Introduces `ComputeMultiScaleSAM2Masks` to generate and cache multi-scale SAM2 masks with NMS post-processing.
- Introduces `ComputeCLIPFeatures` to encode masked regions with OpenCLIP and cache features + per-scale segmentation maps.
- Adds LangSplatV2 preprocessing config dataclasses and a new Python package definition.
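The two transforms above are designed to run in sequence, with the CLIP stage consuming the cached masks produced by the SAM2 stage. A minimal sketch of that call-and-cache composition pattern (the real transforms' constructor signatures and the scene container type are assumptions, not the repo's API):

```python
from typing import Callable

# Hypothetical sketch: each scene transform is a callable that takes a scene
# dict and returns it augmented with newly cached artifacts. The actual
# ComputeMultiScaleSAM2Masks / ComputeCLIPFeatures transforms follow the same
# pattern, but their real signatures may differ.
Transform = Callable[[dict], dict]

def compose(transforms: list[Transform]) -> Transform:
    """Chain transforms left-to-right into a single pipeline callable."""
    def pipeline(scene: dict) -> dict:
        for t in transforms:
            scene = t(scene)
        return scene
    return pipeline

# Toy stand-ins for the two stages: the first produces per-scale masks,
# the second consumes them to produce per-mask features.
def compute_masks(scene: dict) -> dict:
    scene["masks"] = [f"mask@{s}" for s in scene["scales"]]
    return scene

def compute_clip_features(scene: dict) -> dict:
    scene["features"] = [f"clip({m})" for m in scene["masks"]]
    return scene

pipeline = compose([compute_masks, compute_clip_features])
result = pipeline({"scales": ["small", "medium", "large"]})
```

Because each stage only reads keys the previous stage wrote, the transforms stay independent and individually cacheable.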
Reviewed changes
Copilot reviewed 6 out of 7 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| open_vocabulary_segmentation/langsplatv2/pyproject.toml | Adds packaging metadata and dependencies for the new langsplatv2 module. |
| open_vocabulary_segmentation/langsplatv2/langsplatv2/scene_transforms/multi_scale_sam_masks.py | New transform to compute + cache multi-scale SAM2 masks and apply mask NMS. |
| open_vocabulary_segmentation/langsplatv2/langsplatv2/scene_transforms/clip_feature_encoding.py | New transform to compute + cache CLIP features for masked regions and build segmentation index maps. |
| open_vocabulary_segmentation/langsplatv2/langsplatv2/scene_transforms/__init__.py | Exposes the new transforms at the package level. |
| open_vocabulary_segmentation/langsplatv2/langsplatv2/config.py | Adds pipeline configuration and a helper to assemble the transform sequence. |
| instance_segmentation/garfvdb/garfvdb/util.py | Refactors RGB→SH conversion to use module-level constants. |
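The `garfvdb/util.py` refactor hoists the RGB→SH conversion constant to module level. The conversion itself is the standard zeroth-order spherical-harmonics mapping used across Gaussian-splatting codebases; a sketch (the names here are illustrative, not necessarily the repo's):

```python
import math

# Band-0 SH basis constant: Y_0^0 = 1 / (2 * sqrt(pi)).
# Hoisting it to module level avoids recomputing it per call.
SH_C0 = 0.28209479177387814

def rgb_to_sh(rgb: float) -> float:
    """Map an RGB channel value in [0, 1] to its SH DC coefficient."""
    return (rgb - 0.5) / SH_C0

def sh_to_rgb(sh: float) -> float:
    """Inverse mapping from SH DC coefficient back to RGB."""
    return sh * SH_C0 + 0.5
```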
…nsforms/clip_feature_encoding.py Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Signed-off-by: Jonathan Swartz <jonathan@jswartz.info>
Signed-off-by: Jonathan Swartz <jonathan@jswartz.info>
swahtz added a commit that referenced this pull request on Feb 12, 2026
## Summary

Implements the LangSplatV2 training pipeline for learning per-Gaussian sparse language feature fields using fVDB and fvdb-reality-capture. This builds on the preprocessing transforms already merged in #42, adding the model, loss, training loop, and supporting utilities needed to train language-aware Gaussian splats.

closes #32

Key components:

- **Model** (`model.py`): `LangSplatV2Model` wraps a frozen `GaussianSplat3d` with learnable per-Gaussian logits and codebooks. Renders sparse coefficient weight maps via splatting and decodes them into dense CLIP feature maps through codebook lookup.
- **Vector quantization** (`vq_utils.py`): Implements `softmax_to_topk_soft_code` for efficient sparse coefficient generation and `ResidualVectorQuantization` for K-means codebook initialization from ground-truth CLIP features.
- **Loss** (`loss.py`): Cosine similarity and L1 losses with per-pixel masking for regions without valid language features.
- **Dataset** (`training/dataset.py`): `LangSplatV2Dataset` loads pre-computed CLIP features and segmentation maps in compact form. Dense ground-truth feature maps are materialized on-device after transfer using `build_feature_map`, avoiding large CPU-to-GPU transfers.
- **Training runner** (`training/trainer.py`): `LangSplatV2Training` handles the full workflow: dataset construction, K-means codebook initialization, optimizer setup, training/eval loops with gradient accumulation, and checkpointing.
- **Config** (`config.py`): Extended with `LangSplatV2ModelConfig` and `LangSplatV2TrainingConfig` dataclasses.
- **Entry point** (`train_langsplatv2.py`): CLI script using `tyro` for launching training.
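A plausible reading of `softmax_to_topk_soft_code` is: softmax the per-Gaussian logits, keep only the k largest weights, and renormalize so the sparse code still sums to 1. The real implementation is batched torch code; this scalar stdlib sketch just illustrates the assumed semantics:

```python
import math

# Sketch (assumed semantics, not the repo's implementation): turn a logit
# vector into a sparse convex combination over codebook entries by keeping
# only the top-k softmax weights and renormalizing their mass.
def softmax_to_topk_soft_code(logits: list[float], k: int) -> list[float]:
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # Indices of the k largest probabilities.
    topk = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    mass = sum(probs[i] for i in topk)
    code = [0.0] * len(probs)
    for i in topk:                             # sparse, renormalized code
        code[i] = probs[i] / mass
    return code

code = softmax_to_topk_soft_code([2.0, 1.0, 0.1, -1.0], k=2)
```

The sparsity is what makes the rendered coefficient maps cheap: only k codebook entries per Gaussian contribute to the decoded CLIP feature.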
### Performance optimizations

- Compact feature storage with `JaggedTensor` for variable-length per-image features, avoiding padding overhead
- GPU-side dense feature map construction (`build_feature_map`) using `torch.empty` to eliminate costly zero-fill of ~4 GB tensors

## Test plan

- [x] Run training on a preprocessed scene: `python train_langsplatv2.py --scene-dir <path> --checkpoint-dir <path>`
- [x] Verify K-means codebook initialization completes and logs cluster info
- [x] Confirm training loop progresses without OOM on a single GPU (tested with 1080p images)
- [x] Check that checkpoints are saved and can be resumed
- [x] Profile with Nsight Systems to verify no unexpected data transfer bottlenecks

Signed-off-by: Jonathan Swartz <jonathan@jswartz.info>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
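The `torch.empty` optimization works because materializing the dense feature map is a pure gather: every output pixel is written exactly once (either with a looked-up feature or a fill value), so the buffer never needs zero-initialization. A stdlib sketch of that gather, with assumed names and a nested-list stand-in for the tensor:

```python
# Illustrative sketch (names assumed): seg_map stores, per pixel, an index
# into the compact per-image feature table, or -1 where no valid feature
# exists. Since every output element is assigned exactly once, the output
# buffer can start uninitialized -- the analogue of torch.empty.
def build_feature_map(seg_map, features, fill):
    height, width = len(seg_map), len(seg_map[0])
    out = [[None] * width for _ in range(height)]  # "empty" allocation
    for r in range(height):
        for c in range(width):
            idx = seg_map[r][c]
            out[r][c] = fill if idx < 0 else features[idx]
    return out

seg = [[0, 1],
       [-1, 0]]                      # 2x2 segmentation index map
feats = [[1.0, 0.0], [0.0, 1.0]]     # compact per-segment feature table
dense = build_feature_map(seg, feats, fill=[0.0, 0.0])
```

Only the compact table and the integer index map cross the CPU-to-GPU boundary; the multi-gigabyte dense map exists only on-device.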
Implements the preprocessing pipeline that generates SAM2 masks and CLIP features as transforms, in the same style as garfvdb's implementation.

closes #31