parallel guided-merge & better report #31
Conversation
Caution: Review failed. The pull request is closed.

Walkthrough

The pull request introduces several modifications across multiple files, primarily enhancing functionality and error handling. In the CI workflow configuration, the release job has been updated to improve version extraction and enable automatic generation of release notes.
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (3)
.github/workflows/CI.yaml (1)
134-135: Consider customizing the auto-generated release notes.

The switch to auto-generated release notes is a good improvement. However, you might want to consider adding a custom template to better highlight the significant changes in this PR, such as:
- Parallel processing in guided-merge
- Duplicate signature detection
- Improved QC reporting for lowQ signatures
You can customize the auto-generated notes using a `.github/release.yml` configuration file. Would you like me to provide an example template?

src/snipe/api/multisig_reference_QC.py (1)

715-718: Fix indentation for better readability.

The indentation of the conditional expression is inconsistent.

-ycoverage = ((len(ychr_in_sample) / len(ychr_specific_kmers)) / (len(sample_autosomal_sig) / len(autosomals_specific_kmers))
-             if len(ychr_specific_kmers) > 0 and len(autosomals_specific_kmers) > 0 else 0
-             )
+ycoverage = (
+    (len(ychr_in_sample) / len(ychr_specific_kmers)) / (len(sample_autosomal_sig) / len(autosomals_specific_kmers))
+    if len(ychr_specific_kmers) > 0 and len(autosomals_specific_kmers) > 0 else 0
+)

src/snipe/cli/cli_ops.py (1)
1177-1180: Simplify the calculation of `total_skipped`.

Lines 1177-1180 can be simplified using a generator expression for better readability.

Apply this diff to simplify the code:

-total_skipped = 0
-for r in results:
-    total_skipped += len(r.get('skipped_signatures', []))
+total_skipped = sum(len(r.get('skipped_signatures', [])) for r in results)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (3)
- .github/workflows/CI.yaml (2 hunks)
- src/snipe/api/multisig_reference_QC.py (8 hunks)
- src/snipe/cli/cli_ops.py (5 hunks)
🧰 Additional context used
🪛 Ruff
src/snipe/cli/cli_ops.py
1181-1181: f-string without any placeholders. Remove extraneous `f` prefix. (F541)
🔇 Additional comments (7)
.github/workflows/CI.yaml (1)
102-106: LGTM! Improved version extraction script readability.

The version extraction script has been nicely reformatted using line continuation for better readability while maintaining the same functionality.
Also applies to: 109-109
src/snipe/api/multisig_reference_QC.py (5)
418-427: LGTM! Robust error handling for genome statistics calculations.

The added null checks and division-by-zero prevention improve the robustness of the calculations.
457-465: LGTM! Robust error handling for amplicon statistics calculations.

The added null checks and safe dictionary access improve the robustness of the calculations.
467-474: LGTM! Robust error handling for relative coverage calculations.

The added null checks and safe dictionary access improve the robustness of the calculations.
499-507: LGTM! Robust error handling for error and contamination indices.

The added null checks and division-by-zero prevention improve the robustness of the calculations.
772-776: LGTM! Robust error handling for unexplained variance calculation.

The added null checks and division-by-zero prevention improve the robustness of the calculations.
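For reference, here is a minimal, self-contained sketch of the guard pattern these comments describe (null checks plus division-by-zero prevention). The names `safe_ratio`, `coverage_index`, `covered_kmers`, and `total_kmers` are hypothetical and are not taken from `multisig_reference_QC.py`:

```python
# Illustrative sketch of the guarded-division pattern praised above; the names
# below are hypothetical and do not come from the snipe codebase.
from typing import Optional


def safe_ratio(numerator: Optional[float], denominator: Optional[float]) -> float:
    """Return numerator / denominator, or 0.0 if either value is missing or the denominator is zero."""
    if numerator is None or denominator is None or denominator == 0:
        return 0.0
    return numerator / denominator


def coverage_index(covered_kmers: Optional[int], total_kmers: Optional[int]) -> float:
    # Null checks and division-by-zero prevention keep the statistic defined
    # even for empty or partially loaded signatures.
    return safe_ratio(covered_kmers, total_kmers)


if __name__ == "__main__":
    print(coverage_index(750, 1000))   # 0.75
    print(coverage_index(None, 1000))  # 0.0 instead of a TypeError
    print(coverage_index(750, 0))      # 0.0 instead of a ZeroDivisionError
```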
src/snipe/cli/cli_ops.py (1)
9-11: Imports are appropriate for the added functionality.

The added imports (`csv`, `tqdm`, `concurrent.futures`) are necessary for the new features implemented and are correctly included.
@@ -614,7 +623,7 @@ def sort_chromosomes(chrom_dict):
         # calculate the CV for the whole sample
         if autosomal_chr_to_mean_abundance:
             mean_abundances = np.array(list(autosomal_chr_to_mean_abundance.values()), dtype=np.float64)
-            cv = np.std(mean_abundances) / np.mean(mean_abundances) if np.mean(mean_abundances) != 0 else 0.0
+            cv = np.std(mean_abundances) / np.mean(mean_abundances) if np.mean(mean_abundances) > 0 and not np.isnan(np.mean(mean_abundances)) else 0.0
🛠️ Refactor suggestion
Consider simplifying the CV calculation condition.
The condition can be made more readable while maintaining the same checks.
-cv = np.std(mean_abundances) / np.mean(mean_abundances) if np.mean(mean_abundances) > 0 and not np.isnan(np.mean(mean_abundances)) else 0.0
+mean = np.mean(mean_abundances)
+cv = np.std(mean_abundances) / mean if mean > 0 and not np.isnan(mean) else 0.0
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-cv = np.std(mean_abundances) / np.mean(mean_abundances) if np.mean(mean_abundances) > 0 and not np.isnan(np.mean(mean_abundances)) else 0.0
+mean = np.mean(mean_abundances)
+cv = np.std(mean_abundances) / mean if mean > 0 and not np.isnan(mean) else 0.0
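As a standalone illustration of the suggested refactor, here is a minimal sketch with invented abundance values; only the `mean_abundances`, `mean`, and `cv` names come from the diff above:

```python
import numpy as np

# Dummy per-chromosome mean abundances; in the real code these come from
# autosomal_chr_to_mean_abundance.values().
mean_abundances = np.array([12.0, 15.5, 9.8, 14.2], dtype=np.float64)

# Compute the mean once, then guard against zero and NaN before dividing,
# mirroring the suggested refactor.
mean = np.mean(mean_abundances)
cv = np.std(mean_abundances) / mean if mean > 0 and not np.isnan(mean) else 0.0
print(f"CV = {cv:.3f}")
```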
src/snipe/cli/cli_ops.py (Outdated)

    total_skipped += len(r.get('skipped_signatures', []))

    click.echo(f"\nGuided Merge Summary:")
Remove unnecessary `f` prefix from f-string without placeholders.

Line 1181 uses an f-string but doesn't contain any placeholders. This can be simplified by removing the `f` prefix to improve code readability.
Apply this diff to fix the issue:
- click.echo(f"\nGuided Merge Summary:")
+ click.echo("\nGuided Merge Summary:")
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-click.echo(f"\nGuided Merge Summary:")
+click.echo("\nGuided Merge Summary:")
🧰 Tools
🪛 Ruff
1181-1181: f-string without any placeholders. Remove extraneous `f` prefix. (F541)
@@ -881,9 +979,10 @@ def common(ctx, sig_files, sigs_from_file, reset_abundance, trim_singletons,
 @click.option('--trim-below-median', is_flag=True, default=False, help='Trim hashes below the median abundance.')
 @click.option('--debug', is_flag=True, default=False, help='Enable debugging and detailed logging.')
 @click.option('--force', is_flag=True, default=False, help='Overwrite existing files in the output directory.')
+@click.option('--cores', type=int, default=1, help='Number of cores to use for processing experiments in parallel.')
Add input validation for the `--cores` option to ensure it's a positive integer.

The `--cores` option currently accepts any integer value, which could be less than 1. Negative or zero values for `--cores` would lead to unexpected behavior or exceptions. To prevent this, add validation to ensure the input is a positive integer.

Apply this diff to add input validation using `click.IntRange`:
@click.option(
- '--cores', type=int, default=1, help='Number of cores to use for processing experiments in parallel.'
+ '--cores', type=click.IntRange(1), default=1, help='Number of cores to use for processing experiments in parallel.'
)
Alternatively, validate within the function:
def guided_merge(ctx, table, output_dir, reset_abundance, trim_singletons,
min_abund, max_abund, trim_below_median, debug, force, cores):
+ if cores < 1:
+ logger.error(f"Invalid number of cores: {cores}. Number of cores must be a positive integer.")
+ click.echo(f"Error: Invalid number of cores: {cores}. Number of cores must be a positive integer.", err=True)
+ sys.exit(1)
Committable suggestion was skipped due to low confidence.
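For context, here is a minimal, runnable sketch of the `click.IntRange(1)` approach. The `demo` command and its body are hypothetical stand-ins for the actual `guided_merge` command:

```python
import click


@click.command()
@click.option('--cores', type=click.IntRange(1), default=1,
              help='Number of cores to use for processing experiments in parallel.')
def demo(cores: int) -> None:
    # click.IntRange(1) rejects zero and negative values before this callback
    # runs, so no manual validation is needed here.
    click.echo(f"Running with {cores} core(s).")


if __name__ == '__main__':
    demo()
```

With this declaration, click refuses `--cores 0` or negative values at parse time and prints a usage error instead of running the command.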
src/snipe/cli/cli_ops.py (Outdated)

    with ProcessPoolExecutor(max_workers=cores) as executor:
        future_to_exp = {executor.submit(process_experiment, args): args[0] for args in experiments_args}
        for future in tqdm(as_completed(future_to_exp), total=len(future_to_exp), desc="Processing experiments"):
            exp_name = future_to_exp[future]
            try:
                result = future.result()
                results.append(result)
            except Exception as e:
                results.append({
                    'exp_name': exp_name,
                    'merged_signatures': [os.path.basename(p) for p in experiment_mapping[exp_name]],
                    'skipped_signatures': [],
                    'output_file': None,
                    'status': 'failure',
                    'error': str(e)
                })
else:
🛠️ Refactor suggestion
Ensure proper handling of the `ProcessPoolExecutor` in multiprocessing.

When using `ProcessPoolExecutor`, it's important to handle exceptions that may occur in worker processes. The current implementation correctly captures exceptions, but consider adding more detailed logging or error handling to aid debugging if a worker process fails unexpectedly.
You might enhance exception handling as follows:
 with ProcessPoolExecutor(max_workers=cores) as executor:
     future_to_exp = {executor.submit(process_experiment, args): args[0] for args in experiments_args}
     for future in tqdm(as_completed(future_to_exp), total=len(future_to_exp), desc="Processing experiments"):
         exp_name = future_to_exp[future]
         try:
             result = future.result()
             results.append(result)
         except Exception as e:
+            logger.exception(f"Processing of experiment '{exp_name}' failed with an exception.")
             results.append({
                 'exp_name': exp_name,
                 'merged_signatures': [os.path.basename(p) for p in experiment_mapping[exp_name]],
                 'skipped_signatures': [],
                 'output_file': None,
                 'status': 'failure',
                 'error': str(e)
             })
This adds logging of the exception traceback, which can be very helpful during debugging.
Committable suggestion was skipped due to low confidence.
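For context, here is a minimal, self-contained sketch of this submit / `as_completed` / `logger.exception` pattern. The `process_experiment` stub, the experiment names, and the result dictionaries are invented for illustration and are not the snipe implementation:

```python
import logging
from concurrent.futures import ProcessPoolExecutor, as_completed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("guided_merge_demo")


def process_experiment(exp_name: str) -> dict:
    # Stand-in for the real per-experiment merge; one input fails on purpose
    # so the error path is exercised.
    if exp_name == "exp_bad":
        raise ValueError("simulated failure")
    return {"exp_name": exp_name, "status": "success"}


def run_all(experiments: list, cores: int = 2) -> list:
    results = []
    with ProcessPoolExecutor(max_workers=cores) as executor:
        future_to_exp = {executor.submit(process_experiment, exp): exp for exp in experiments}
        for future in as_completed(future_to_exp):
            exp_name = future_to_exp[future]
            try:
                results.append(future.result())
            except Exception as e:
                # logger.exception records the full traceback, which makes
                # failures inside worker processes much easier to debug.
                logger.exception("Processing of experiment '%s' failed.", exp_name)
                results.append({"exp_name": exp_name, "status": "failure", "error": str(e)})
    return results


if __name__ == "__main__":
    print(run_all(["exp_a", "exp_bad", "exp_c"]))
```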
Affected commands: `snipe ops guided-merge` (new `--cores` option) and `snipe qc`.