
feat(analysis): Add Measurements Retention Limit Option for Metrics #1729

Merged 1 commit into master on Jan 12, 2022

Conversation

agrawroh
Member

Description

It can be very useful to retain more data points while running an analysis in dry-run mode, in order to understand what caused a particular metric to error out or why the final result came out as inconclusive.

Currently, the controller retains only the ten latest measurements, and it's not possible to configure this value because it is hard-coded.

Changes

This PR adds a new option called measurementRetention which can be used to retain a number of measurements other than the default ten for metrics running in any mode (dry/non-dry). Setting this option to 0 disables it, and the controller reverts to the existing behavior of retaining the latest ten measurements.
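For illustration, a sketch of how the option might be set on an AnalysisRun. Only the spec.measurementRetention field name comes from this PR; the per-metric sub-fields shown here (metricName, limit) are assumptions for the example:

```yaml
# Hypothetical AnalysisRun manifest illustrating the new option.
# Sub-field names below are assumptions; only spec.measurementRetention
# itself is named in this PR.
apiVersion: argoproj.io/v1alpha1
kind: AnalysisRun
metadata:
  name: example-analysis
spec:
  measurementRetention:
    - metricName: error-rate   # metric the limit applies to (assumed field)
      limit: 20                # keep the latest 20 measurements instead of 10 (assumed field)
  metrics:
    - name: error-rate
      # provider and success criteria omitted for brevity
```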

Checklist

  • Either (a) I've created an enhancement proposal and discussed it with the community, (b) this is a bug fix, or (c) this is a chore.
  • The title of the PR is (a) conventional, (b) states what changed, and (c) suffixes the related issue number. E.g. "fix(controller): Updates such and such. Fixes #1234".
  • I've signed my commits with DCO
  • I have written unit and/or e2e tests for my change. PRs without these are unlikely to be merged.
  • My builds are green. Try syncing with master if they are not.
  • My organization is added to USERS.md.

Signed-off-by: Rohit Agrawal [email protected]

@codecov

codecov bot commented Dec 23, 2021

Codecov Report

Merging #1729 (ee91d9c) into master (a3477cf) will increase coverage by 0.01%.
The diff coverage is 91.42%.


@@            Coverage Diff             @@
##           master    #1729      +/-   ##
==========================================
+ Coverage   82.02%   82.04%   +0.01%     
==========================================
  Files         116      116              
  Lines       16096    16126      +30     
==========================================
+ Hits        13203    13230      +27     
- Misses       2218     2220       +2     
- Partials      675      676       +1     
Impacted Files             Coverage Δ
utils/analysis/helpers.go  78.08% <81.25%> (+0.14%) ⬆️
analysis/analysis.go       86.09% <100.00%> (+0.35%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a3477cf...ee91d9c. Read the comment docs.

@agrawroh agrawroh requested a review from jessesuen December 23, 2021 08:56
@agrawroh agrawroh added the ready-for-review Ready for final review label Dec 23, 2021
@sonarcloud

sonarcloud bot commented Dec 23, 2021

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (A)
Vulnerabilities: 0 (A)
Security Hotspots: 0 (A)
Code Smells: 5 (A)

No Coverage information
7.2% Duplication

Member

@jessesuen jessesuen left a comment


This looks mostly good but I do not think the spec.measurementRetention is honored when specified from an AnalysisTemplate.

I think a change to utils/analysis/helpers.go::NewAnalysisRunFromTemplates needs to be added similar to the dryRun feature.

@agrawroh
Member Author

agrawroh commented Jan 11, 2022

This looks mostly good but I do not think the spec.measurementRetention is honored when specified from an AnalysisTemplate.

I think a change to utils/analysis/helpers.go::NewAnalysisRunFromTemplates needs to be added similar to the dryRun feature.

@jessesuen Yes, I think we need to add it to NewAnalysisRunFromTemplates for the analysis being created from Rollouts/Experiments and I was thinking of doing it in a follow-up PR. AnalysisTemplate itself should work fine with the change we have here.
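As a rough illustration of the follow-up change discussed here, a minimal, hypothetical sketch of flattening measurementRetention rules from multiple templates into a single list, analogous to the dryRun handling mentioned above. The type and function names are invented for this example and do not reflect the actual implementation in utils/analysis/helpers.go:

```go
package main

import "fmt"

// MeasurementRetention is a hypothetical stand-in for the CRD field:
// it caps how many measurements are kept for a given metric.
type MeasurementRetention struct {
	MetricName string
	Limit      int32
}

// flattenMeasurementRetention merges the retention rules from several
// templates, rejecting duplicate rules for the same metric name
// (conflicting template fields are typically treated as an error).
func flattenMeasurementRetention(templates ...[]MeasurementRetention) ([]MeasurementRetention, error) {
	seen := map[string]bool{}
	var merged []MeasurementRetention
	for _, t := range templates {
		for _, r := range t {
			if seen[r.MetricName] {
				return nil, fmt.Errorf("duplicate measurementRetention for metric %q", r.MetricName)
			}
			seen[r.MetricName] = true
			merged = append(merged, r)
		}
	}
	return merged, nil
}

func main() {
	a := []MeasurementRetention{{MetricName: "error-rate", Limit: 20}}
	b := []MeasurementRetention{{MetricName: "latency", Limit: 50}}
	merged, err := flattenMeasurementRetention(a, b)
	fmt.Println(len(merged), err) // prints: 2 <nil>
}
```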

If you think that we should just include all the changes for Rollouts/Experiments in the same PR then I can add everything to this PR.

@jessesuen
Member

If you think that we should just include all the changes for Rollouts/Experiments in the same PR then I can add everything to this PR.

I am okay with a follow-up PR that takes advantage of the feature.

Labels
ready-for-review Ready for final review

2 participants