This is a downstream experiment for the seqair pileup aggregation API in my seqair fork: sstadick/seqair#1.
The branch pins seqair / seqair-types to my seqair fork commit 6d9251c and changes only the non-mate base-depth --seqair-pileup path to use the new custom accumulator API:

- PileupEngine::pileup_with(...)
- SeqairPileupPositionAccumulator
The existing mate-aware base-depth -m --seqair-pileup path still uses materialized PileupColumns because mate fixing needs per-column grouping by QNAME.
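The split between the two paths can be sketched with toy types. Everything below (the observation struct, the trait shape, the method bodies) is an illustrative stand-in under assumed semantics, not the real signatures in the pinned fork:

```rust
/// Toy stand-in for one emitted pileup observation at a reference position
/// (the real seqair observation type is an assumption here).
struct Observation {
    is_del: bool,
}

/// Stand-in for the accumulator contract that `pileup_with` drives: the
/// engine pushes each observation into it instead of materializing a column.
trait PositionAccumulator {
    fn accept(&mut self, obs: &Observation);
}

/// Toy engine holding the observations for one reference position.
struct PileupEngine {
    observations: Vec<Observation>,
}

impl PileupEngine {
    /// Non-mate path: stream observations straight into the accumulator,
    /// so no public pileup column is ever built.
    fn pileup_with<A: PositionAccumulator>(&self, acc: &mut A) {
        for obs in &self.observations {
            acc.accept(obs);
        }
    }

    /// Mate-aware path: hand back a materialized column so the caller can
    /// group rows by QNAME for mate fixing (sketched as a plain slice here).
    fn pileup_column(&self) -> &[Observation] {
        &self.observations
    }
}

/// Minimal accumulator: counts non-deletion rows as depth.
#[derive(Default)]
struct DepthAccumulator {
    depth: u64,
}

impl PositionAccumulator for DepthAccumulator {
    fn accept(&mut self, obs: &Observation) {
        if !obs.is_del {
            self.depth += 1;
        }
    }
}
```

The point of the shape is that the streaming path touches each observation exactly once, while the column path exists only where per-column QNAME grouping forces materialization.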
Why
Previously, non-mate base-depth --seqair-pileup materialized a PileupColumn and then perbase looped over column.raw_alignments() to compute depth, base counts, insertions, deletions, refskips, and fail counts.
With the accumulator API, perbase computes those row counts while seqair is already walking emitted pileup observations. For simple non-mate base-depth, this avoids materializing a public pileup column and avoids a downstream second pass over alignments.
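The single-pass counting can be sketched as follows; the field names, filter ordering, and counting rules are assumptions for illustration, not perbase's or seqair's actual definitions:

```rust
/// Toy stand-in for what the engine emits for one read overlapping one
/// reference position.
struct Obs {
    base: u8,         // called base, or 0 for non-base events
    is_ins: bool,     // an insertion starts at this position
    is_del: bool,     // the read is in a deletion here
    is_refskip: bool, // the read is in an N CIGAR op (e.g. spliced intron)
    failed: bool,     // the read failed filters
}

/// Row counts that were previously derived by a second loop over
/// column.raw_alignments(); here they fold in as observations stream.
#[derive(Default, Debug, PartialEq)]
struct BaseDepth {
    depth: u64,
    a: u64, c: u64, g: u64, t: u64, n: u64,
    ins: u64, del: u64, ref_skip: u64, fail: u64,
}

impl BaseDepth {
    /// One-pass update: called once per observation, so no second pass over
    /// a materialized column is needed afterwards.
    fn accept(&mut self, obs: &Obs) {
        if obs.failed {
            self.fail += 1;
            return;
        }
        if obs.is_refskip {
            self.ref_skip += 1;
            return;
        }
        if obs.is_del {
            self.del += 1;
        } else {
            self.depth += 1;
            match obs.base.to_ascii_uppercase() {
                b'A' => self.a += 1,
                b'C' => self.c += 1,
                b'G' => self.g += 1,
                b'T' => self.t += 1,
                _ => self.n += 1,
            }
        }
        if obs.is_ins {
            self.ins += 1;
        }
    }
}
```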
Correctness checks
The existing htslib-vs-seqair process-region parity test now exercises the accumulator path for non-mate cases and the old materialized path for mate-aware cases.
The empty-SEQ regression test now compares all three paths: htslib, seqair with materialized columns, and seqair with the accumulator.
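The property these tests assert can be illustrated with toy types; the two functions below stand in for the streaming and materialized paths under assumed counting rules and do not model the real htslib or seqair code:

```rust
/// Toy observation: only the deletion flag matters for this sketch.
#[derive(Clone)]
struct Obs {
    is_del: bool,
}

/// Counts the parity check compares across paths.
#[derive(Default, Debug, PartialEq)]
struct Counts {
    depth: u64,
    del: u64,
}

/// Path 1: fold counts while streaming observations (accumulator style).
fn streaming(obs: &[Obs]) -> Counts {
    let mut c = Counts::default();
    for o in obs {
        if o.is_del { c.del += 1 } else { c.depth += 1 }
    }
    c
}

/// Path 2: materialize a "column" first, then take a second pass over it.
fn materialized(obs: &[Obs]) -> Counts {
    let column: Vec<Obs> = obs.to_vec(); // stands in for a PileupColumn
    let mut c = Counts::default();
    for o in &column {
        if o.is_del { c.del += 1 } else { c.depth += 1 }
    }
    c
}
```

A parity test then asserts `streaming(input) == materialized(input)` for each fixture, including the empty input that motivates the empty-SEQ regression case.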
Compared with the previous seqair v0.1.0 materialized-column run on the same benchmark, non-mate base-depth --seqair-pileup moved from 5.323 ± 0.127 s to 5.104 ± 0.166 s (a ~4% reduction in mean wall time, though the confidence intervals overlap) while preserving exact parity. The main intended win is avoiding the materialized column and the downstream second pass over alignments for simple non-mate counting.
Local validation
Results:
Benchmarking
Benchmark numbers are posted as a PR comment.