Introduce versionInfo in metrics CRD #933
Comments
This seems like a good idea, but it means metrics are specific to versions. Where the version is generated in a CI pipeline, it may change on every run. Won't I have to define new metrics for every experiment?
Yes and no. If metrics are collected with consistent labels, no: the same metric definitions can be reused across experiments. Otherwise, yes: create metrics for an experiment every time you create a new experiment. Even in this situation, note that, in HelmX and SimpleX experiments, metrics can be templated as part of Helm templates alongside the experiment template (see the sketch below), which means that, from the CI/CD perspective, the only change is to the values file. SimpleX and HelmX are really the ways to simplify CI/CD with Iter8, not raw experiments. So, the additional complexity of creating metrics for each experiment arises only if metrics are not collected with consistent labels and you are using raw experiments rather than SimpleX or HelmX.
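For illustration, a minimal sketch of how such templating might look, assuming a Helm chart that renders Iter8 metric resources alongside the experiment; the chart value `appLabel` and the query are hypothetical, not part of any actual HelmX chart:

```yaml
# metric.yaml in a hypothetical Helm chart: the version-specific label
# comes from the values file, so CI/CD only edits values.yaml per release.
apiVersion: iter8.tools/v2alpha2
kind: Metric
metadata:
  name: error-rate
spec:
  type: Gauge
  provider: prometheus
  params:
  - name: query
    value: sum(rate(error_count{app="{{ .Values.appLabel }}"}[30s]))
```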
Without the above change, CI/CD still needs to populate versionInfo variables in the experiment object for each metric...
Note: This issue is now subsumed and superseded by #934
Is your feature request related to a problem? Please describe the problem.
Currently, metric definitions are complicated for two reasons: custom metrics must reference `versionInfo` defined in a separate experiment resource, and they require complex interpolation logic to link the two resources together.
Describe the feature/solution you'd like
Alter the definition of custom metrics as follows ... there will now be a single placeholder called ${elapsedTime}, which has nothing to do with a version. Conceptually, this definition is far easier to follow than having to link between two different resources and define complex interpolation logic.
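As an illustration, here is a minimal sketch of what a custom metric might look like under this proposal. The surrounding fields follow Iter8's v2alpha2 Metric CRD, but the `versionInfo` section and its exact placement are assumptions about the proposed design, not confirmed syntax:

```yaml
apiVersion: iter8.tools/v2alpha2
kind: Metric
metadata:
  name: request-count
spec:
  type: Counter
  provider: prometheus
  jqExpression: ".data.result[0].value[1] | tonumber"
  urlTemplate: http://prometheus.monitoring:9090/api/v1/query
  # proposed: one versionDetail per version, each carrying the HTTP params
  # used to query the provider for that version. The only placeholder is
  # ${elapsedTime}, which is version-independent.
  versionInfo:
  - params:
    - name: query
      value: sum(increase(request_count{workload='productpage-v1'}[${elapsedTime}s]))
  - params:
    - name: query
      value: sum(increase(request_count{workload='productpage-v2'}[${elapsedTime}s]))
```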
Alter the definition of mock metrics as follows ...
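Similarly, a hypothetical sketch of a mock metric under this proposal, where each `versionDetail` supplies a `value` for one version (field placement assumed):

```yaml
apiVersion: iter8.tools/v2alpha2
kind: Metric
metadata:
  name: mock-latency
spec:
  type: Gauge
  mock: true
  # if mock == true, every version must supply a value
  versionInfo:
  - value: 20.0
  - value: 25.0
```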
Note: no changes are needed for builtin metrics due to #912
Note: `versionInfo` for metrics will be a list of `versionDetail` objects. This object has three optional fields, namely, `params`, `body`, and `value`. The first two are for custom metrics, and the last one is for a mock metric.
Note on validation: the length of `versionInfo` needs to be the same wherever `versionInfo` lists appear.
Note on validation: if `mock == true`, every version should have a `value`.
Will this feature/solution bring new benefits for Iter8 users? What are they?
The above change effectively decouples metrics and experiment resources, apart from the basic requirement that the number of versions in the metrics object equal the number of versions in the experiment object (illustrated below).
Making metrics and experiment objects self-contained in these ways simplifies the definition of both.
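To make the version-count requirement concrete, here is a hypothetical pairing using Iter8's v2alpha2 experiment `versionInfo` structure; the names and target are illustrative assumptions:

```yaml
# An experiment with two versions: a baseline plus one candidate.
# (strategy, criteria, etc. omitted for brevity)
apiVersion: iter8.tools/v2alpha2
kind: Experiment
metadata:
  name: demo-experiment
spec:
  target: default/productpage
  versionInfo:
    baseline:
      name: productpage-v1
    candidates:
    - name: productpage-v2
  # any metric referenced by this experiment must then carry a versionInfo
  # list of length two, as in the request-count sketch above
```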
Does this issue require a design doc/discussion? If there is a link to the design document/discussions, please provide it below.
Yes, please see above.
How will this feature be tested?
How will this feature be documented?