Threshold enhancements - custom exit codes per threshold #680
Comments
I think that the first part of the issue ("Respect thresholds in setup/teardown code") happens because, at the moment, k6 doesn't emit any metrics from the setup and teardown code.
It might be a good idea to also allow this.

edit: I renamed the issue, since the first part has already been released for a long time now, in k6 v0.22.0 (#678).
Now that I think about it, what should happen if multiple thresholds fail? We can't really exit with the exit code of the top threshold, since the thresholds are defined as an unordered map. Maybe after #1443 / #1441 are done we can do that, but if we implement this issue before them, we can only use the generic non-zero exit code.

edit: the ranges reserved for user-specified exit codes should also be settled and documented as part of #870 before this is implemented.
Hey @dgzlopes, it would be good to have your opinion here.
Respect thresholds in setup/teardown code
The docs for thresholds suggest that they should also apply in setup/teardown code. However, thresholds don't seem to work there:
moving code that adds to a custom metric from the default function/VU code (where it triggers the threshold correctly and the metric value appears in the results output) into the setup code leads to neither an abort on threshold failure nor the metric being printed in the output.
This might be intentional, in that custom metrics only get collected at the end of a VU iteration, but it still feels like this should be possible at the end of the setup/teardown function as well.
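As a sketch of the reported behavior (an assumed minimal repro, not taken verbatim from the report), the same `Counter.add()` call trips the threshold when placed in the default function but apparently not in `setup()`:

```javascript
// Assumed repro sketch for a k6 script; run with `k6 run script.js`.
import { Counter } from 'k6/metrics';

const myCounter = new Counter('my_counter');

export const options = {
  thresholds: {
    // Should fail (and abort) as soon as anything is added to the counter.
    my_counter: [{ threshold: 'count<1', abortOnFail: true }],
  },
};

export function setup() {
  myCounter.add(1); // reported: no abort, and the metric is missing from the output
}

export default function () {
  // myCounter.add(1); // here the threshold fires and aborts as expected
}
```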
Custom exit code setting per threshold
When running k6 in a CI environment, I would like to be able to catch the type of test failure.
Currently there is either fail or no fail (a non-zero exit code or exit code zero). I know I could add failure messages as tags on the result data and parse that data after the test, but since a dynamic exit code is already a thing, it would feel natural to be able to influence the value of that non-zero exit code.
With that I could label the test run appropriately in my CI tool - e.g. "p90 exceeded SLA" when a Trend threshold failed, or "error rate exceeded 10%" when a Rate threshold failed.
Something like:
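For instance (a sketch only: the `exitCode` field is the proposed addition from this issue, not an existing k6 option; the rest is standard k6 threshold syntax):

```javascript
import http from 'k6/http';
import { Rate } from 'k6/metrics';

const errorRate = new Rate('error_rate');

export const options = {
  thresholds: {
    error_rate: [
      // abortOnFail exists in k6 today; exitCode is the proposed new field.
      { threshold: 'rate<0.1', abortOnFail: true, exitCode: 2 },
    ],
  },
};

export default function () {
  const res = http.get('https://test.k6.io'); // example target URL
  errorRate.add(res.status >= 400);
}
```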
With that, when the error rate metric exceeds 0.1, the test will abort and k6 will exit with code 2.
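On the CI side, the run could then be labeled by switching on the exit code. A hypothetical Node helper, using the example labels above (the codes 2 and 3 are the proposed per-threshold codes, not anything k6 emits today):

```javascript
// Hypothetical CI helper: map a k6 exit code to a human-readable label.
function labelForExitCode(code) {
  const labels = {
    0: 'passed',
    2: 'error rate exceeded 10%', // proposed: a Rate threshold failed
    3: 'p90 exceeded SLA',        // proposed: a Trend threshold failed
  };
  return labels[code] || `failed (exit code ${code})`;
}

// In CI you would run k6, capture its exit code, and label the run, e.g.:
// const { spawnSync } = require('child_process');
// const result = spawnSync('k6', ['run', 'script.js'], { stdio: 'inherit' });
// console.log(labelForExitCode(result.status));

console.log(labelForExitCode(2)); // → "error rate exceeded 10%"
```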