Vision Filter Extraction & Weighted Scoring System#161
Closed
caparomula wants to merge 18 commits into main from
Conversation
This was linked to issues on Mar 17, 2026
126e7ef to e48ee12
Reject vision pose estimates that imply physically impossible robot velocities by comparing successive samples from the same camera.

Changes:
- Add velocityConsistency test to the VisionTest enum that penalizes observations implying velocity > 130% of max drive speed
- Introduce TestContext class with a fluent API to pass camera-specific state (lastAcceptedPose, lastAcceptedTimestamp) to vision tests
- Refactor VisionTest.test() to accept TestContext, enabling tests that require per-camera historical data
- Add a configurable EnumSet<VisionTest> enabledTests to control which tests are applied (allows disabling tests for debugging/tuning)
- Simplify the test loop to iterate over the enabledTests set
- Add constants: maxReasonableVelocityMps, velocityCheckTimeoutSeconds

The velocity check uses a 500 ms timeout to avoid penalizing observations after camera gaps, and ignores dt <= 1 ms to handle same-frame cases.
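The check described above can be sketched roughly as follows. Constant names follow the commit message (maxReasonableVelocityMps, velocityCheckTimeoutSeconds), but the max drive speed, the penalty score, and the method signature are assumptions for illustration; the real test lives in the VisionTest enum and reads its history from a TestContext.

```java
// Hypothetical sketch of the velocityConsistency test; not the actual VisionTest code.
public final class VelocityConsistencySketch {
    static final double MAX_DRIVE_SPEED_MPS = 4.5;                                   // assumed drive max
    static final double MAX_REASONABLE_VELOCITY_MPS = 1.3 * MAX_DRIVE_SPEED_MPS;     // 130% gate
    static final double VELOCITY_CHECK_TIMEOUT_SECONDS = 0.5;                        // 500 ms camera-gap timeout
    static final double MIN_DT_SECONDS = 0.001;                                      // ignore same-frame samples

    /**
     * Scores a new (x, y, timestamp) sample against the last accepted sample from
     * the same camera. 1.0 = plausible; 0.0 = penalty (the actual penalty value
     * in the real code is an assumption here).
     */
    public static double velocityConsistency(
            double x, double y, double timestamp,
            Double lastX, Double lastY, Double lastTimestamp) {
        if (lastTimestamp == null) return 1.0;                 // no history yet for this camera
        double dt = timestamp - lastTimestamp;
        if (dt > VELOCITY_CHECK_TIMEOUT_SECONDS) return 1.0;   // stale history: camera gap, skip check
        if (dt <= MIN_DT_SECONDS) return 1.0;                  // same-frame case, dt too small to trust
        double velocity = Math.hypot(x - lastX, y - lastY) / dt;
        return velocity <= MAX_REASONABLE_VELOCITY_MPS ? 1.0 : 0.0;
    }
}
```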
- Move scoring logic from Vision.java to VisionFilter.java
- Add weighted geometric mean for combining test scores
- Add velocity consistency test (penalizes impossible movement)
- Add cross-camera correlation boost (rewards agreement)
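The weighted geometric mean mentioned above combines per-test scores s_i with weights w_i as (∏ s_i^w_i)^(1/∑w_i), so a low score on a heavily weighted test drags the combined score down sharply. A minimal sketch, assuming scores in [0, 1]; the real VisionFilter API shape may differ:

```java
// Illustrative weighted geometric mean for combining vision test scores.
public final class WeightedScoreSketch {
    /** Combines scores in [0,1]: score = (prod s_i^w_i)^(1 / sum w_i). */
    public static double weightedGeometricMean(double[] scores, double[] weights) {
        double logSum = 0.0;
        double weightSum = 0.0;
        for (int i = 0; i < scores.length; i++) {
            if (scores[i] == 0.0) return 0.0;  // any zero-scored test rejects outright
            logSum += weights[i] * Math.log(scores[i]);
            weightSum += weights[i];
        }
        // Work in log space to avoid underflow from many small factors.
        return Math.exp(logSum / weightSum);
    }
}
```

Working in log space keeps the product numerically stable when many tests each contribute a small factor.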
- Add scripts/compare_vision_logs.py for parsing WPILOG files and comparing vision processing between real and replay logs
- Add doc/VISION_REPLAY_ANALYSIS.md documenting methodology and results from comparing original vs new VisionFilter behavior

Analysis shows the new weighted geometric mean scoring produces a meaningful score distribution (mean 0.72 vs 0.06) while being more selective (40% fewer observations accepted).
Create scripts/compare_vision_logs.py to analyze vision processing differences between real robot logs and AdvantageKit replay logs.

Features:
- Parse the WPILOG binary format and decode Pose3d struct arrays
- Compare RealOutputs vs ReplayOutputs score distributions
- Detect false positives by analyzing accepted pose trajectories for impossible velocities and discontinuities

Key findings from replay analysis:
- The new filter rejects 40% of observations the old code accepted
- Mean score increased from 0.058 to 0.72 (+1144%)
- Found 140 impossible-velocity violations in the old code's accepted poses (max 171 m/s), confirming the velocity consistency check catches genuine false positives

Add doc/VISION_REPLAY_ANALYSIS.md documenting methodology and results.
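The false-positive detection described above boils down to scanning the sequence of accepted poses and counting successive pairs whose implied velocity is physically impossible. The actual script is Python; this sketch only mirrors that counting step, with an illustrative (timestamp, x, y) sample layout rather than the script's real data model:

```java
// Sketch of impossible-velocity detection over an accepted pose trajectory.
public final class VelocityViolationSketch {
    /**
     * samples[i] = {timestampSeconds, xMeters, yMeters}, in time order.
     * Counts successive pairs whose implied velocity exceeds maxVelocityMps.
     */
    public static int countViolations(double[][] samples, double maxVelocityMps) {
        int violations = 0;
        for (int i = 1; i < samples.length; i++) {
            double dt = samples[i][0] - samples[i - 1][0];
            if (dt <= 0) continue;  // skip duplicate or out-of-order timestamps
            double dist = Math.hypot(samples[i][1] - samples[i - 1][1],
                                     samples[i][2] - samples[i - 1][2]);
            if (dist / dt > maxVelocityMps) violations++;
        }
        return violations;
    }
}
```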
Update VisionIOPhotonVisionSim to match real OV2311 camera specs:
- 800x600 resolution, 70° diagonal FOV
- 35 FPS, 30 ms average latency with 5 ms jitter
- Calibration error for detection noise (0.25 ± 0.08 px)
Log analysis of akit_26-03-18 revealed that minScore=0.02 was effectively not filtering: only the binary withinBoundaries and moreThanZeroTags gates caused rejections. 52 bad poses (PnP ambiguity errors, off by 1.8-7.1 m) scored 0.60-0.66 and were accepted, causing 11 pose teleportation events.

Two changes:
- velocityConsistency returns 0.7 (uncertainty) instead of 1.0 when a camera has no recent history, closing the loophole that gave bad first-in-a-while poses a free pass
- minScore raised from 0.02 to 0.6, which should reject the penalized bad poses (~0.58) while preserving good observations (~0.71+)

Not yet validated on the robot. See doc/VISION_FILTER_TUNING.md for full analysis and tuning guide.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Compare vision yaw to gyro yaw to catch ambiguous PnP poses. Eliminates pose jumps caused by single-tag ambiguity in replay testing.
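A minimal sketch of that yaw consistency idea: take the angular difference between the vision pose's field-absolute yaw and the gyro yaw, wrap it to (-π, π], and penalize large disagreement. The 10° tolerance and the binary pass/fail scoring are assumptions; the real test may use a different threshold or a graded score:

```java
// Hypothetical yawConsistency sketch; flags ambiguous single-tag PnP solutions
// whose yaw disagrees with the gyro. Not the actual VisionTest code.
public final class YawConsistencySketch {
    static final double MAX_YAW_ERROR_RAD = Math.toRadians(10.0);  // assumed tolerance

    /** Returns 1.0 when vision yaw matches gyro yaw, 0.0 when it clearly disagrees. */
    public static double yawConsistency(double visionYawRad, double gyroYawRad) {
        // IEEEremainder wraps the difference into [-pi, pi], so the 359° vs 1°
        // case reads as a 2° error rather than 358°.
        double error = Math.abs(Math.IEEEremainder(visionYawRad - gyroYawRad, 2.0 * Math.PI));
        return error <= MAX_YAW_ERROR_RAD ? 1.0 : 0.0;
    }
}
```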
Log replay analysis showed velocityConsistency provided no value beyond yawConsistency: filtering results were identical with and without it. Also switched from getHeading to getRawGyroRotation for the yaw consistency check, since vision reports field-absolute yaw, not offset-adjusted heading.
ee1abe9 to 6da17ea
Summary

Extracts vision observation filtering and scoring logic into a standalone, testable VisionFilter class with a new weighted geometric mean scoring system.

Key Changes

Weighted Test Scoring Architecture
- withinBoundaries, moreThanZeroTags: weight 1.0 (hard requirements)
- velocityConsistency: weight 0.9
- unambiguous: weight 0.8
- pitchError, rollError, heightError: weight 0.7
- distanceToTags: weight 0.5

Refactoring
- VisionFilter class extracted from the Vision subsystem for isolated unit testing
- TestContext pattern for zero-allocation test evaluation

Unit Tests (VisionFilterTest.java)

Analysis Tooling
- Replay comparison script (compare_vision_logs.py) with false positive detection

Bug Fixes