
Vision Filter Extraction & Weighted Scoring System #161

Closed

caparomula wants to merge 18 commits into main from vision-tests

Conversation

@caparomula (Contributor)

Summary

Extracts vision observation filtering and scoring logic into a standalone, testable VisionFilter class with a new weighted geometric mean scoring system.

Key Changes

Weighted Test Scoring Architecture

  • Each vision test now has an individual weight (0.5–1.0) reflecting its importance:
    • withinBoundaries, moreThanZeroTags: weight 1.0 (hard requirements)
    • velocityConsistency: weight 0.9
    • unambiguous: weight 0.8
    • pitchError, rollError, heightError: weight 0.7
    • distanceToTags: weight 0.5
  • Scores are combined via a weighted geometric mean: a single low-scoring test pulls the overall score down in proportion to its weight, rather than the simple averaging or pass/fail logic used previously
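The combination rule above can be sketched as follows. This is a minimal illustration of a weighted geometric mean, assuming each test yields a score in [0, 1] and a weight; the class and method names are illustrative, not the PR's actual API.

```java
public class WeightedScore {
    /** Combine per-test scores s_i with weights w_i as (prod s_i^w_i)^(1 / sum w_i). */
    public static double weightedGeometricMean(double[] scores, double[] weights) {
        double logSum = 0.0;
        double weightSum = 0.0;
        for (int i = 0; i < scores.length; i++) {
            // A zero score from any test zeroes the whole product, which is
            // how weight-1.0 tests behave as hard requirements.
            if (scores[i] <= 0.0) {
                return 0.0;
            }
            logSum += weights[i] * Math.log(scores[i]);
            weightSum += weights[i];
        }
        return Math.exp(logSum / weightSum);
    }

    public static void main(String[] args) {
        // One low score (0.3 at weight 0.9) drags the combined score well
        // below the simple average of the three inputs.
        double combined = weightedGeometricMean(
                new double[] {1.0, 1.0, 0.3}, new double[] {1.0, 1.0, 0.9});
        System.out.println(combined);
    }
}
```

Working in log space keeps the computation numerically stable when many small scores are multiplied together.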

Refactoring

  • Extract VisionFilter class from Vision subsystem for isolated unit testing
  • TestContext pattern for zero-allocation test evaluation
  • Cross-camera correlation boosting when multiple cameras agree on pose
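The correlation boost in the last bullet might look roughly like this sketch, which treats poses as simple (x, y) points. The tolerance, boost factor, and names are assumptions for illustration, not values from the PR.

```java
public class CorrelationBoost {
    static final double agreementToleranceMeters = 0.15; // assumed tolerance
    static final double boostFactor = 1.2;               // assumed boost

    /** Translational distance between two (x, y) poses in meters. */
    static double distance(double x1, double y1, double x2, double y2) {
        return Math.hypot(x1 - x2, y1 - y2);
    }

    /** Boost the score when another camera's pose agrees, clamping to 1.0. */
    public static double apply(double score, double x, double y,
                               double otherX, double otherY) {
        if (distance(x, y, otherX, otherY) <= agreementToleranceMeters) {
            return Math.min(1.0, score * boostFactor);
        }
        return score;
    }

    public static void main(String[] args) {
        System.out.println(apply(0.7, 1.00, 2.00, 1.05, 2.05)); // cameras agree: boosted
        System.out.println(apply(0.7, 1.00, 2.00, 3.00, 2.00)); // disagree: unchanged
    }
}
```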

Unit Tests (VisionFilterTest.java)

  • Comprehensive tests for all scoring behaviors
  • Per-camera velocity consistency with configurable test sets
  • Boundary, tag filtering, ambiguity, and correlation tests

Analysis Tooling

  • WPILOG comparison script (compare_vision_logs.py) with false positive detection
  • Vision replay analysis documentation

Bug Fixes

  • Fix game state remaining floor issue
  • Remove unused fused pose supplier from Vision

caparomula and others added 18 commits March 19, 2026 18:46
Reject vision pose estimates that imply physically impossible robot
velocities by comparing successive samples from the same camera.

Changes:
- Add velocityConsistency test to VisionTest enum that penalizes
  observations implying velocity > 130% of max drive speed
- Introduce TestContext class with fluent API to pass camera-specific
  state (lastAcceptedPose, lastAcceptedTimestamp) to vision tests
- Refactor VisionTest.test() to accept TestContext, enabling tests
  that require per-camera historical data
- Add configurable EnumSet<VisionTest> enabledTests to control which
  tests are applied (allows disabling tests for debugging/tuning)
- Simplify test loop to iterate over enabledTests set
- Add constants: maxReasonableVelocityMps, velocityCheckTimeoutSeconds

The velocity check uses a 500ms timeout to avoid penalizing observations
after camera gaps, and ignores dt <= 1ms to handle same-frame cases.
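The check described in this commit, together with a minimal TestContext, could be sketched as below. Poses are simplified to (x, y), the max drive speed and the penalty curve are assumptions, and the real fluent API surely differs. (Note that a later commit in this PR changes the no-history return value from 1.0 to 0.7.)

```java
public class VelocityConsistency {
    static final double maxReasonableVelocityMps = 5.0 * 1.3; // 130% of an assumed max drive speed
    static final double velocityCheckTimeoutSeconds = 0.5;

    /** Minimal per-camera context, mirroring the fluent TestContext idea. */
    public static class TestContext {
        Double lastX, lastY, lastTimestamp;

        public TestContext withLastAcceptedPose(double x, double y) {
            this.lastX = x;
            this.lastY = y;
            return this;
        }

        public TestContext withLastAcceptedTimestamp(double t) {
            this.lastTimestamp = t;
            return this;
        }
    }

    /** Score an observation: 1.0 if plausible, scaled down if it implies impossible speed. */
    public static double test(TestContext ctx, double x, double y, double timestamp) {
        if (ctx.lastTimestamp == null) return 1.0;        // no history for this camera yet
        double dt = timestamp - ctx.lastTimestamp;
        if (dt <= 0.001) return 1.0;                      // same-frame / duplicate sample
        if (dt > velocityCheckTimeoutSeconds) return 1.0; // camera gap: don't penalize
        double v = Math.hypot(x - ctx.lastX, y - ctx.lastY) / dt;
        return v <= maxReasonableVelocityMps
                ? 1.0
                : Math.max(0.0, maxReasonableVelocityMps / v); // assumed penalty curve
    }

    public static void main(String[] args) {
        TestContext ctx = new TestContext()
                .withLastAcceptedPose(0.0, 0.0)
                .withLastAcceptedTimestamp(10.0);
        System.out.println(test(ctx, 0.1, 0.0, 10.1)); // ~1 m/s: plausible
        System.out.println(test(ctx, 5.0, 0.0, 10.1)); // ~50 m/s: heavily penalized
    }
}
```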
- Move scoring logic from Vision.java to VisionFilter.java
- Add weighted geometric mean for combining test scores
- Add velocity consistency test (penalizes impossible movement)
- Add cross-camera correlation boost (rewards agreement)
- Add scripts/compare_vision_logs.py for parsing WPILOG files and
  comparing vision processing between real and replay logs
- Add doc/VISION_REPLAY_ANALYSIS.md documenting methodology and
  results from comparing original vs new VisionFilter behavior

Analysis shows the new weighted geometric mean scoring produces a
meaningful score distribution (mean 0.72 vs 0.06) while being more
selective, accepting 40% fewer observations.
Create scripts/compare_vision_logs.py to analyze vision processing
differences between real robot logs and AdvantageKit replay logs.

Features:
- Parse WPILOG binary format and decode Pose3d struct arrays
- Compare RealOutputs vs ReplayOutputs score distributions
- Detect false positives by analyzing accepted pose trajectories
  for impossible velocities and discontinuities
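The false-positive detection idea in the last feature is sketched below in Java, for consistency with the other examples here (the actual script is Python): scan a sequence of accepted (timestamp, x, y) samples and flag any step that implies a physically impossible velocity. The threshold is an assumption.

```java
import java.util.ArrayList;
import java.util.List;

public class TrajectoryCheck {
    static final double maxPlausibleVelocityMps = 6.5; // assumed limit

    /** Return indices i where the step from sample i-1 to i implies v > limit. */
    public static List<Integer> findViolations(double[][] samples) {
        List<Integer> violations = new ArrayList<>();
        for (int i = 1; i < samples.length; i++) {
            double dt = samples[i][0] - samples[i - 1][0];
            if (dt <= 0) continue; // skip out-of-order or duplicate timestamps
            double dist = Math.hypot(samples[i][1] - samples[i - 1][1],
                                     samples[i][2] - samples[i - 1][2]);
            if (dist / dt > maxPlausibleVelocityMps) {
                violations.add(i);
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        double[][] accepted = {
            {0.00, 0.0, 0.0},
            {0.02, 0.1, 0.0}, // ~5 m/s: fine
            {0.04, 3.5, 0.0}, // ~170 m/s: a teleport, flagged
            {0.06, 3.6, 0.0},
        };
        System.out.println(findViolations(accepted));
    }
}
```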

Key findings from replay analysis:
- New filter rejects 40% of observations old code accepted
- Mean score increased from 0.058 to 0.72 (+1144%)
- Found 140 impossible velocity violations in old code's accepted
  poses (max 171 m/s), confirming the velocity consistency check
  catches genuine false positives

Add doc/VISION_REPLAY_ANALYSIS.md documenting methodology and results.
Update VisionIOPhotonVisionSim to match real OV2311 camera specs:
- 800x600 resolution, 70° diagonal FOV
- 35 FPS, 30ms average latency with 5ms jitter
- Calibration error for detection noise (0.25 ± 0.08 px)
Log analysis of akit_26-03-18 revealed that minScore=0.02 was effectively
not filtering — only the binary withinBoundaries and moreThanZeroTags gates
caused rejections. 52 bad poses (PnP ambiguity errors, off by 1.8-7.1m)
scored 0.60-0.66 and were accepted, causing 11 pose teleportation events.

Two changes:
- velocityConsistency returns 0.7 (uncertainty) instead of 1.0 when a
  camera has no recent history, closing the loophole that gave bad
  first-in-a-while poses a free pass
- minScore raised from 0.02 to 0.6, which should reject the penalized
  bad poses (~0.58) while preserving good observations (~0.71+)

Not yet validated on the robot. See doc/VISION_FILTER_TUNING.md for
full analysis and tuning guide.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Compare vision yaw to gyro yaw to catch ambiguous PnP poses. Eliminates
pose jumps caused by single-tag ambiguity in replay testing.
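A yaw-consistency check of this kind might look like the sketch below: compare the field-absolute yaw reported by vision against the raw gyro yaw and penalize large disagreement, the signature of an ambiguous single-tag PnP solution. The tolerance and penalty curve are assumptions, not the PR's values.

```java
public class YawConsistency {
    static final double yawToleranceRad = Math.toRadians(10.0); // assumed tolerance

    /** Wrap an angle difference into [-pi, pi]. */
    static double wrap(double rad) {
        return Math.IEEEremainder(rad, 2.0 * Math.PI);
    }

    /** 1.0 when vision yaw matches gyro yaw; falls off linearly past tolerance. */
    public static double test(double visionYawRad, double rawGyroYawRad) {
        double err = Math.abs(wrap(visionYawRad - rawGyroYawRad));
        if (err <= yawToleranceRad) return 1.0;
        return Math.max(0.0, 1.0 - (err - yawToleranceRad) / Math.PI);
    }

    public static void main(String[] args) {
        System.out.println(test(Math.toRadians(5), Math.toRadians(3)));  // agrees
        System.out.println(test(Math.toRadians(90), Math.toRadians(0))); // ambiguity flip
    }
}
```

Comparing against the raw gyro rotation (rather than an offset-adjusted heading) matters because vision reports field-absolute yaw, as the commit below notes.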
Log replay analysis showed velocityConsistency provided zero value
beyond yawConsistency: filtering results were identical with and without it.
Also switched from getHeading to getRawGyroRotation for the yaw consistency
check, since vision reports field-absolute yaw, not offset-adjusted heading.


Development

Successfully merging this pull request may close these issues.

Vision filter: accept vision observations when multiple cameras report a similar pose
