
Output check/threshold results in a machine-readable unit test format to publish test results in CI #1120

Closed
marlukjanska opened this issue Aug 23, 2019 · 5 comments · Fixed by grafana/jslib.k6.io#20
Labels: enhancement, evaluation needed (proposal needs to be validated or tested before fully implementing it in k6), high prio, ux

@marlukjanska commented Aug 23, 2019

We're setting up k6 in our Azure DevOps CI pipeline, which allows publishing test results in predefined unit test formats: https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/test/publish-test-results?view=azure-devops&tabs=yaml

So I was thinking of adding a teardown function to write out those test results in the JUnit result format, like here.

Having said that, I was wondering whether that would be the best approach, or whether something like this is already available?

Thanks!

@mstoykov (Contributor) commented

As mentioned in #666 (:smiling_imp:), k6 currently doesn't have functionality for writing to files, and there are still no plans to add it, for the same portability, security, and scalability reasons.
Additionally, as mentioned in #351, there is no way to find out from inside the test whether the thresholds passed or failed, nor to access any of the metrics data.

You could possibly use the JSON output (or the CSV output, after we merge #1067), then do some processing on the results, for example with jq, to transform them into JUnit results. But this in itself seems like a lot of work, and depending on how big your tests are, you might need to wait for #1114 in order to at least have gzip compression when writing the JSON, so you don't run out of disk space.
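The post-processing route described above can be sketched in plain Node.js instead of jq. This is only a sketch under an assumption: it assumes the NDJSON shape of k6's `--out json` output, where each check result is a `Point` line on the `checks` metric with `value` 1 for a pass and 0 for a failure — verify against your k6 version before relying on it.

```javascript
// Sketch: convert k6's NDJSON output into a minimal JUnit XML report.
// ASSUMPTION: each check result is a line like
//   {"type":"Point","metric":"checks","data":{"value":1,"tags":{"check":"..."}}}
// with value 1 = pass, 0 = fail. Not an official k6 API.
function checksToJUnit(ndjson) {
  const results = new Map(); // check name -> { pass, fail }
  for (const line of ndjson.split('\n').filter(Boolean)) {
    const entry = JSON.parse(line);
    if (entry.type !== 'Point' || entry.metric !== 'checks') continue;
    const name = entry.data.tags.check;
    const r = results.get(name) || { pass: 0, fail: 0 };
    if (entry.data.value === 1) r.pass++; else r.fail++;
    results.set(name, r);
  }
  const failed = [...results.values()].filter((r) => r.fail > 0).length;
  const cases = [...results].map(([name, r]) =>
    r.fail === 0
      ? `  <testcase name="${name}"/>`
      : `  <testcase name="${name}"><failure message="${r.fail} of ${r.pass + r.fail} iterations failed"/></testcase>`
  );
  return [
    `<testsuite name="k6 checks" tests="${results.size}" failures="${failed}">`,
    ...cases,
    '</testsuite>',
  ].join('\n');
}
```

Wired together, something like `k6 run --out json=results.json script.js` followed by a small script that feeds the file contents to `checksToJUnit()` would produce a report that Azure DevOps' PublishTestResults task can ingest as JUnit.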

IMO the best idea is for us to return the JSON report (#355) and add thresholds there. Other people have been asking for this, and it has been climbing up the priority list. I am not so certain that we will directly support JUnit, as most of the output won't be compatible, but the checks and thresholds may be a good candidate for a separate JUnit report containing just those. Looking at the supported formats, all of them are popular, but JUnit might still be the most popular and the best choice for us.

@na-- (Member) commented Aug 26, 2019

Yeah, it makes sense to me to have the results from any thresholds in the future end-of-test machine-readable report that k6 will produce. We still haven't discussed the exact format of that report, and even though I doubt we'll be able to use a standardized format for this, some research won't hurt...

In any case, a custom k6 JSON format for all k6 data, plus a separate standardized format (JUnit, or more likely XUnit?) for pass/fail criteria (i.e. thresholds and maybe checks?), so that they can be parsed automatically by CI systems, also makes a lot of sense to me...

@na-- na-- changed the title [Question] Checks output in AzureDevops any unit test format to Publish test results in CI Output check/threshold results in a machine-readable unit test format to publish test results in CI Aug 27, 2019
@na-- na-- added this to the v0.26.0 milestone Aug 27, 2019
@na-- na-- modified the milestones: v0.26.0, v0.27.0 Oct 16, 2019
@na-- na-- modified the milestones: v0.27.0, v0.26.0 Nov 11, 2019
@na-- na-- added the evaluation needed proposal needs to be validated or tested before fully implementing it in k6 label May 21, 2020
@na-- na-- modified the milestones: v0.27.0, v1.0.0 May 21, 2020
@simskij (Contributor) commented Sep 4, 2020

Some things I think we should consider:

  • JUnit XML is the most commonly supported output format for CI test report visualisation, so I'd be all for adding it as an output format.
  • There is a community project already, but having it natively in k6 would definitely make a lot of sense, and likely be less error-prone.
  • Having the option to decide whether to report failed checks and thresholds as warnings or errors would be really valuable, as it would allow the user to decide whether a failed k6 test should break the build or not. With that said, I think both should be available as possible build-breakers.

@na-- (Member) commented Sep 4, 2020

I should probably create a separate issue for this, but it's likely that every variant of "k6 generates some sort of summary at the end of a test run" will be implemented with templates. The idea was originally discussed in #1319 (comment), but it's generally applicable.

Instead of k6 having to wrangle the same pieces of data in multiple different formats (end-of-test text summary, end-of-test JSON summary, JUnit XML, text executive report, etc.), we should just support templates. Then, we can support the common "report" cases via some built-in templates, but also allow users to expose the k6 results however they want to, with custom templates.
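The template idea can be illustrated with a toy sketch. All names below are hypothetical, not actual k6 API: the point is simply that one summary data object can be rendered through interchangeable templates.

```javascript
// Toy sketch of "one summary, many templates". Nothing here is real
// k6 API; the data shape and template names are made up for illustration.
const summary = { checks: { passes: 98, fails: 2 }, iterations: 100 };

const templates = {
  // a built-in text template
  text: (s) => `checks: ${s.checks.passes}/${s.checks.passes + s.checks.fails} passed`,
  // a user-supplied CSV template
  csv: (s) => `passes,fails\n${s.checks.passes},${s.checks.fails}`,
};

// k6 would pick the template by name; users could register their own.
function render(name, data) {
  return templates[name](data);
}
```

The JUnit XML, JSON summary, and executive-report cases would each just be one more entry in the template table, rather than separate output code paths inside k6.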

@na-- na-- self-assigned this Dec 9, 2020
@na-- na-- modified the milestones: v1.0.0, v0.30.0 Dec 9, 2020
@na-- (Member) commented Jan 13, 2021

Now that #1768 is merged, this should be easy to do. @mstoykov has already made a prototype (#1768 (comment)): https://gist.github.com/MStoykov/5e052293cc1bbdd11284a2cc5a90a194

We'll close the issue once we've polished that prototype and uploaded it to https://jslib.k6.io
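A simplified, stand-alone version of what such a helper might look like: one JUnit `<testcase>` per threshold. The field layout `metrics[*].thresholds[*].ok` is an assumption about the shape of the `handleSummary()` data, so treat this as a sketch rather than the actual jslib implementation.

```javascript
// Sketch: turn end-of-test summary data into a JUnit XML report, one
// <testcase> per threshold. ASSUMPTION: each metric in data.metrics
// carries a `thresholds` object mapping the threshold expression to
// an object with an `ok` boolean; this is not guaranteed to match
// the real handleSummary() data structure.
function thresholdsToJUnit(data, suiteName = 'k6 thresholds') {
  const cases = [];
  let failures = 0;
  for (const [metric, m] of Object.entries(data.metrics)) {
    for (const [expr, t] of Object.entries(m.thresholds || {})) {
      if (t.ok) {
        cases.push(`  <testcase name="${metric} - ${expr}"/>`);
      } else {
        failures++;
        cases.push(
          `  <testcase name="${metric} - ${expr}"><failure message="threshold crossed"/></testcase>`
        );
      }
    }
  }
  return [
    `<testsuite name="${suiteName}" tests="${cases.length}" failures="${failures}">`,
    ...cases,
    '</testsuite>',
  ].join('\n');
}
```

In a k6 script this would be returned from the summary hook, e.g. `export function handleSummary(data) { return { 'junit.xml': thresholdsToJUnit(data) }; }`, so the XML file lands on disk at the end of the run without any file-writing API inside the test itself.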
