Modify preconsume script to work on one cohort at a time #1107
Conversation
@@ -124,14 +125,18 @@ function consume_hardcoded_samples() {
    rm -f ${PROBLEMATIC_EVENT_CONSUME_IDS_FILEPATH} ${PROBLEMATIC_METADATA_CONSUME_IDS_FILEPATH}
    touch ${PROBLEMATIC_EVENT_CONSUME_IDS_FILEPATH}
    touch ${PROBLEMATIC_METADATA_CONSUME_IDS_FILEPATH}
    echo "P-0025907-N01-IM6" >> "${PROBLEMATIC_METADATA_CONSUME_IDS_FILEPATH}"
I mean, I know this was here before, but any idea what this is for? 😆 Is it not getting caught by our usual script?
Yeah, I looked into it. This is a sample we're receiving from the tumor server, but it has a normal identifier. Rob added it to this consume_hardcoded_samples function until we got more info/a fix from CVR. I reached out to Mihir about it again and he said to continue consuming it; he'll get back to me if there's any change.
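For context, a minimal sketch of what this workaround amounts to, reusing the variable from the diff above (the comment wording is illustrative, not the script's actual text):

    # Workaround (illustrative comment, not from the script): this sample comes
    # from the tumor server but carries a normal identifier, so it is
    # force-listed for consumption until CVR supplies a fix.
    echo "P-0025907-N01-IM6" >> "${PROBLEMATIC_METADATA_CONSUME_IDS_FILEPATH}"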
Small suggestion for a change, but otherwise looks good!
Nice work. My only question is whether the script detect_samples_with_problematic_metadata.py will run correctly on the impact / impact-heme / access cohorts. If we don't want to worry about whether it runs correctly, we could skip the scan for metadata problems in those datasets. But the metadata is at the top level of the JSON object returned for samples, so it is probably compatible with all cohorts.
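A rough illustration of why the top-level layout matters: a check along these lines only needs each returned sample entry to expose its metadata at the top level, regardless of cohort. This is a sketch, not the actual detect_samples_with_problematic_metadata.py logic, and the "results", "metaData", and "gene-panel" field names are assumptions:

    # Sketch: list sample IDs whose top-level metadata block carries an
    # UNKNOWN gene panel. Field names are hypothetical.
    jq -r '.results[] | select(.metaData["gene-panel"] == "UNKNOWN") | .metaData.dmp_sample_id' cohort_fetch_output.json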
}

function detect_samples_with_problematic_metadata() {
    $DETECT_SAMPLES_WITH_PROBLEMATIC_METADATA_SCRIPT_FILEPATH ${ARCHER_FETCH_OUTPUT_FILEPATH} ${ARCHER_CONSUME_IDS_FILEPATH}
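Since the PR runs the preconsume logic once per cohort, the natural shape is for this function to take the cohort's files as arguments rather than hardcoding the ARCHER paths. A sketch of that shape, not necessarily the PR's exact code (the argument names are assumptions):

    # Sketch of a per-cohort variant: the caller passes cohort-specific
    # fetch output and consume-ids paths instead of the ARCHER variables.
    function detect_samples_with_problematic_metadata() {
        local fetch_output_filepath=$1
        local consume_ids_filepath=$2
        $DETECT_SAMPLES_WITH_PROBLEMATIC_METADATA_SCRIPT_FILEPATH "$fetch_output_filepath" "$consume_ids_filepath"
    }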
I guess before we were only scanning archer for genepanel=UNKNOWN. This change seems to mean that we will scan all 4 cohorts for bad gene panel references (probably a good thing). Does this run smoothly on the other cohorts, though? (The JSON schema may differ.)
Yes, with the way I rewrote it, all cohorts will be checked for problematic events and problematic metadata. I figured it made sense since we already have the functionality. I didn't run into any errors testing on the 4 cohorts, but the queues that I pulled didn't have any issues in them, so I might need to test a bit more to make sure.
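Schematically, the rewrite means both scans run for every cohort. A sketch under stated assumptions: the cohort names come from the comments above, and the path layout and consume-ids filenames are invented for illustration:

    # Sketch: run both problem scans against each cohort's fetched JSON.
    # PRECONSUME_HOME and the filenames are hypothetical.
    for cohort in impact impact-heme access archer ; do
        fetch_output="${PRECONSUME_HOME}/${cohort}/fetch_output.json"
        detect_samples_with_problematic_events "$fetch_output" "${PRECONSUME_HOME}/${cohort}/event_consume_ids.txt"
        detect_samples_with_problematic_metadata "$fetch_output" "${PRECONSUME_HOME}/${cohort}/metadata_consume_ids.txt"
    done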
This PR:
- Modifies the preconsume_problematic_samples.sh script to work on one cohort at a time.
- Modifies the fetch-dmp-data-for-import.sh script to call the preconsume_problematic_samples.sh script once for each cohort, immediately before the CVR fetch for that cohort.
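A minimal sketch of the calling pattern the description implies. Only preconsume_problematic_samples.sh and the per-cohort ordering come from the PR; the script path, cohort names, and the fetch helper name are assumptions:

    # Sketch of the per-cohort flow in fetch-dmp-data-for-import.sh: preconsume
    # runs for a cohort immediately before that cohort's CVR fetch.
    for cohort in impact impact-heme access archer ; do
        "$PORTAL_HOME/scripts/preconsume_problematic_samples.sh" "$cohort"   # path is an assumption
        fetch_cvr_data "$cohort"   # hypothetical stand-in for the actual CVR fetch step
    done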