Dev chunk optimization postprocessveppanel #390
base: dev
Conversation
…. Upgrade pybedtools. Added wave
…annotating with no flags."
…eated in create_consensus_panel.py
Implemented parallel processing of VEP annotation through configurable chunking:
- Added `panel_sites_chunk_size` parameter (default: 0, no chunking)
- When >0, splits the sites file into chunks for parallel VEP annotation
- Uses the bash `split` command for efficient chunking with preserved headers (see the sketch after this list)
- Modified SITESFROMPOSITIONS module:
  - Outputs multiple chunk files (`*.sites4VEP.chunk*.tsv`) instead of a single file
  - Logs the chunk configuration and the number of chunks created
  - Chunk size configurable via `ext.chunk_size` in modules.config
- Updated CREATE_PANELS workflow:
  - Flattens chunks with `.transpose()` for parallel processing
  - Each chunk gets a unique ID for VEP tracking
  - Merges chunks using `collectFile` with header preservation
- Added SORT_MERGED_PANEL module:
  - Sorts merged panels by chromosome and position (genomic order)
  - Prevents "out of order" errors in downstream BED operations
  - Applied to both compact and rich annotation outputs
- Enhanced logging across the chunking pipeline:
  - SITESFROMPOSITIONS: reports chunk_size and the number of chunks created
  - POSTPROCESS_VEP_ANNOTATION: shows internal chunk_size and expected chunks
  - CUSTOM_ANNOTATION_PROCESSING: displays chr_chunk_size and processing info

Configuration:
- `panel_sites_chunk_size`: controls file-level chunking (0 = disabled)
- `panel_postprocessing_chunk_size`: internal memory management
- `panel_custom_processing_chunk_size`: internal chromosome chunking

Benefits:
- Parallelizes VEP annotation for large panels
- Reduces memory footprint per task
- Maintains genomic sort order for downstream tools
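The pipeline itself does this with the bash `split` command; purely as a hedged illustration of the header-preserving pattern, a minimal Python sketch could look like this (the function, prefix, and file names are hypothetical):

```python
# Illustrative only: the pipeline uses bash `split`; this shows the same
# header-preserving chunking idea. Names below are hypothetical.
def split_sites_file(path, chunk_size, prefix="sites4VEP.chunk"):
    """Split a TSV into chunks of `chunk_size` data rows, repeating the header."""
    with open(path) as src:
        header = src.readline()
        idx, rows = 0, []
        for line in src:
            rows.append(line)
            if len(rows) == chunk_size:
                write_chunk(prefix, idx, header, rows)
                idx, rows = idx + 1, []
        if rows:  # final partial chunk
            write_chunk(prefix, idx, header, rows)

def write_chunk(prefix, idx, header, rows):
    with open(f"{prefix}{idx}.tsv", "w") as out:
        out.write(header)    # every chunk keeps the original header line
        out.writelines(rows)
```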
I went over all the files and left some comments; in general, these are the main points:
- One bigger change: do not parallelize the processing of the Ensembl VEP annotation itself, but keep the parallelization to splitting the input.
- The chunking for the custom processing of the panel is a good idea, but I am not sure the implementation is correct; it should be revised.
- Add an omega snapshot as part of the test.
Once these details are solved, it would be great to merge the dev branch here (resolving conflicts) and confirm that all the tests are passing:
- Merge with the dev branch and update the test snapshots if needed.
These changes may not be required, since I already updated the Nextflow module to make the failing consensus file optional.
I would prefer not to generate the file if there is nothing to report.
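A minimal sketch of that preference, assuming a pandas DataFrame of failing records (the variable and output file names are hypothetical):

```python
import pandas as pd

# `failing_records` and the output file name are hypothetical placeholders.
failing_records = pd.DataFrame()  # rows that fail the consensus checks
if not failing_records.empty:     # skip the file entirely when there is nothing to report
    failing_records.to_csv("failing_consensus.tsv", sep="\t", index=False)
```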
This update looks good, but I am curious to know which samples this was defined from.
It would be great to make sure it works well for the last 2 duplexomes and for the kidney cohort with the pancancer panel, for example.
```groovy
// === SENSIBLE DEFAULTS ===
// Most processes use minimal resources based on usage analysis
cpus = { 1 }
```
I think this is OK, but we should check that all the steps that can use multiple threads at least get the chance to increase their number of CPUs on retry attempts.
(Nothing to change, just a heads-up on this topic.)
when is this one used?
```python
chr_data = chr_data.drop_duplicates(
    subset=['CHROM', 'POS', 'REF', 'ALT', 'MUT_ID', 'GENE', 'CONTEXT_MUT', 'CONTEXT', 'IMPACT'],
    keep='first'
)
chr_data.to_csv(customized_output_annotation_file, header=True, index=False, sep="\t")
```
I am not sure this does the same as before: it is supposed to output the same full TSV table with the values replaced in some of the rows, but here it seems that only the information from the last chromosome will be output. Maybe I got it wrong.
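If the per-chromosome overwrite is indeed the problem, a minimal sketch of one possible fix is to append each chromosome's rows and emit the header only once. Variable names follow the snippet above; `per_chromosome_chunks` and the loop structure are assumptions about the surrounding code:

```python
# Sketch: keep all chromosomes in the output by appending instead of rewriting.
for i, chr_data in enumerate(per_chromosome_chunks):  # hypothetical iterable
    chr_data = chr_data.drop_duplicates(
        subset=['CHROM', 'POS', 'REF', 'ALT', 'MUT_ID', 'GENE', 'CONTEXT_MUT', 'CONTEXT', 'IMPACT'],
        keep='first'
    )
    chr_data.to_csv(
        customized_output_annotation_file,
        mode='w' if i == 0 else 'a',  # truncate once, then append
        header=(i == 0),              # header only for the first chromosome
        index=False, sep="\t"
    )
```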
```groovy
// Skip empty lines at the beginning (can happen with collectFile)
// def headerLine = lines.find { it.trim() != "" }
// assert headerLine != null : "Omega output should contain a header"
// def header = headerLine.split('\t')
// assert header.contains("gene") : "Omega output should contain 'gene' column"
// assert header.contains("sample") : "Omega output should contain 'sample' column"
// assert header.contains("dnds") : "Omega output should contain 'dnds' column"
```
With the update in omega, we could also check a snapshot of the file here.
```groovy
}

params {
    panel_postprocessing_chunk_size = 100000000
```
I would remove this parameter, since it is complex to manage properly.
```groovy
min_muts_per_sample = 0
selected_genes = ''
panel_with_canonical = true
panel_postprocessing_chunk_size = 100000 // a very big number will avoid chunking by default
```
As I said in other places, I would remove this parameter.
```groovy
max_memory = 950.GB
max_cpus = 196
max_time = 30.d
```
I understand this needs to be changed by the user, but maybe we should switch these to more realistic thresholds, no?
| "panel_postprocessing_chunk_size": { | ||
| "type": "integer", | ||
| "description": "Internal chunk size for VEP postprocessing memory management", | ||
| "default": 100000, | ||
| "fa_icon": "fas fa-memory", | ||
| "help_text": "Controls how the panel_postprocessing_annotation.py script processes data internally. Higher values use more memory but may be faster. Not related to file-level chunking." | ||
| }, |
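For context, a hedged sketch of what such an internal chunk size typically governs in a pandas-based script: the number of rows held in memory per iteration. The file names and the pass-through processing below are assumptions, not taken from `panel_postprocessing_annotation.py`:

```python
import pandas as pd

# Hypothetical file names; the real script's per-chunk logic is not shown here.
chunk_size = 100_000  # rows held in memory per iteration
with pd.read_csv("vep_output.tsv", sep="\t", chunksize=chunk_size) as reader:
    for i, chunk in enumerate(reader):
        # ... per-chunk postprocessing would happen here ...
        chunk.to_csv("postprocessed.tsv",
                     mode="w" if i == 0 else "a",  # write once, then append
                     header=(i == 0), index=False, sep="\t")
```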
Suggested change: remove the `panel_postprocessing_chunk_size` entry from the schema.
[copilot generated]
Performance Optimization: Chunked Processing for Large Panel Annotations
Overview
This PR introduces memory-efficient chunked processing for VEP annotation post-processing, enabling the pipeline to handle arbitrarily large panel annotations without memory constraints.
Changes Summary
✅ Implemented Chunking Optimizations
1. `panel_postprocessing_annotation.py` - Chunked VEP Output Processing
   Technical details:
   Process: `CREATEPANELS:POSTPROCESSVEPPANEL`, `VCFANNOTATEPANEL`
2. `panel_custom_processing.py` - Chromosome-Based Chunked Loading
   Technical details:
   Process: `CUSTOMPROCESSING` / `CUSTOMPROCESSINGRICH` (a loading sketch follows below)
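As a rough illustration of chromosome-based chunked loading, one possible pattern is shown below. The column name, file handling, and the assumption that the input TSV is sorted by chromosome (as the sorted merged panel is) are all assumptions, not the script's actual implementation:

```python
import pandas as pd

def iter_chromosomes(path, chunksize=100_000):
    """Yield (chromosome, DataFrame) pairs while holding only one chromosome
    in memory at a time. Assumes the TSV is sorted by CHROM."""
    current_chrom, parts = None, []
    with pd.read_csv(path, sep="\t", chunksize=chunksize) as reader:
        for chunk in reader:
            for chrom, group in chunk.groupby("CHROM", sort=False):
                if chrom != current_chrom and parts:
                    # chromosome boundary crossed: flush the finished one
                    yield current_chrom, pd.concat(parts, ignore_index=True)
                    parts = []
                current_chrom = chrom
                parts.append(group)
    if parts:  # flush the last chromosome
        yield current_chrom, pd.concat(parts, ignore_index=True)
```

This bounds peak memory by the largest chromosome rather than by the whole panel.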
❌ VEP Cache Storage Location - No Performance Impact
What was tested: VEP cache storage location (`/workspace/datasets/vep` or `/data/bbg/datasets/vep`)
Results: no performance impact on the `ENSEMBLVEP_VEP` process
Commits:
- 035a0c7 (April 3, 2025): Added VEP cache beegfs support
- 8e40d83 (April 24, 2025): Removed VEP cache beegfs optimization (no benefit)
Current approach: cache location configured via `params.vep_cache`
Resource Configuration
Updated resource limits for chunked processes:
Integration Points
Affected Subworkflows:
- CREATEPANELS → POSTPROCESSVEPPANEL → processes VEP output in chunks
- CUSTOMPROCESSING / CUSTOMPROCESSINGRICH → uses chunked loading for custom regions

Pipeline Flow:
Testing
Tested on:
Validation:
Performance Impact
Migration Notes
No breaking changes. Existing pipelines continue to work with improved memory efficiency.
Related Commits
- 276152d: Chunking for `panel_custom_processing.py`
- 035a0c7: VEP cache beegfs attempt (added)
- 8e40d83: VEP cache beegfs removal (no performance gain)
- 1dffd94, 945c129, d243ebc, etc.: resource tuning

Conclusion
This PR successfully implements memory-efficient chunked processing for panel annotation post-processing, enabling the pipeline to scale to arbitrarily large panels without memory constraints. The VEP cache storage location experiment confirmed that computation, not I/O, is the bottleneck for annotation runtime.