Feature: merge testmon databases #77
Comments
No, this is not possible yet. One thing that should work is sharing the .testmondata file between the parallel runs: if they have an identical home directory, they write to the same .testmondata. This doesn't work across machines, though; it all has to be local (or maybe on a shared network volume).
Yeah, that won't work for us; our test builds run across the network. I just opened a .testmondata file and looked at it. All paths in there are relative. Can't we just select all rows from one database and dump them into another?
Sorry for the delay. Yes, a manual merge would be possible as long as the executed nodes are exclusive: there is a unique key on (node_variant, node_name) in the node_file table. What 'UI' would you like here? Coverage data is also doable, of course. Thanks for writing down the feedback; I do think about these issues even though I'm not implementing solutions yet.
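Since .testmondata is a SQLite file, the manual merge described above can be sketched with the standard library's sqlite3 module. The table and column names below (node_file, node_variant, node_name) are taken from the unique key mentioned in the comment; the real testmon schema has more tables and coverage data, so treat this as an illustration of the approach, not a drop-in tool.

```python
import sqlite3

def merge_testmondata(target_path, source_path):
    """Copy rows from one .testmondata-style SQLite file into another.

    Sketch only: assumes both files contain a node_file table with a
    unique key on (node_variant, node_name), per the comment above.
    """
    con = sqlite3.connect(target_path)
    # ATTACH lets us read the second database inside the same connection.
    con.execute("ATTACH DATABASE ? AS src", (source_path,))
    # INSERT OR IGNORE skips rows that would violate the unique key on
    # (node_variant, node_name), so overlapping nodes keep the target's
    # version -- matching the "executed nodes are exclusive" assumption.
    con.execute("INSERT OR IGNORE INTO node_file SELECT * FROM src.node_file")
    con.commit()
    con.execute("DETACH DATABASE src")
    con.close()
```

Each parallel CI sub-build would produce its own file, and a final step would fold them into one with repeated calls to this function.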
That would be a great UI. And having some kind of coverage output would be nice, since we could then stop running a separate coverage job that is pretty redundant.
Maybe this is possible already; I tried searching for it but couldn't find anything.
It would be great if one could merge several .testmondata files. The use case I have is that our CI environment builds in parallel, so we currently run a separate serial pass of the entire test suite just to generate .testmondata. This seems wasteful. It would be much nicer if we could build the testmondata on each sub-build and then merge the results.
On a related note: we also have a separate coverage job, which likewise seems wasteful. It would be nice if we could extract coverage data in the same run.