cmd/go: add a flag to report test compile time #33527
Comments
Thanks for the suggestion.
Are your test files so slow to compile that this actually makes a measurable difference in the optimal sharding configuration? Test execution can take a long time, but I would assume that any reasonable test file would compile in less time than it takes to spin up a container or whatever you are using for sharding.
@bcmills we spin up ~14 containers and shard our packages across these containers. Since the containers are brand new, there is no cache at this point (granted, within a container the build cache does get reused between packages). Also, I tried timing one of our packages, @ALTree, as an example.
This means that compiling took 32 seconds, while testing took 26. In some cases we even have compiling taking ~20 seconds and tests taking ~100ms. This really skews our sharding: with our 14 containers, some finish in ~2 minutes while others go up to ~6 minutes.
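(For reference, one way to get numbers like these — this is an editorial illustration, not part of the original comment, and the package path is hypothetical. go test -c compiles and links the test binary without running it.)

time go test -c -o /tmp/foo.test ./pkg/foo   # compile + link only
time /tmp/foo.test                           # run only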
Ah, I see. Thanks. These are certainly some extreme examples.
Why not simply compile your tests first using go test -c?
@beoran I might be missing something, but if I build the test binary once, go test still seems to recompile. If you are suggesting running the test binary, how do I run only some packages? Also, can I build a test binary for only some packages, or am I forced to build one for all packages? @bradfitz was also looking at sharding tests; I don't know how far he got.
Yes, AFAIK go test will always recompile the test binary. If you don't want that, then you'll need a build system such as a makefile or Bazel to only build when the sources have been updated. If you use go test to compile test executables, then you will obtain a test binary per package.
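(As an illustration of the per-package approach — the package path and test name below are hypothetical. A compiled test binary accepts the standard -test.* flags, so a subset of the tests in that one package can be run directly.)

go test -c -o foo.test ./pkg/foo
./foo.test -test.run TestLogin -test.v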
To expand on that: in normal operation, go test caches the results of the tests themselves, not the test binaries. Since the only time we need to run the test is when the inputs have changed, there isn't much point in caching the resulting binary: it won't be run again until it needs to be rebuilt again anyway.
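(For illustration, with a hypothetical package path: running the same, unchanged package test twice shows the result cache in action.)

go test ./pkg/foo   # compiles, links, and runs the tests
go test ./pkg/foo   # inputs unchanged: go test replays the previous result and marks the package (cached)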
@beoran so it seems like, irrespective of building binaries, if this time were reported by go test my sharding problem would go away.
@roopakv, if a shard can contain more than one package, the build time is going to be nonlinear anyway (but still monotonic), and that's true for any compiled language, not just Go. If you want high-precision sharding, fundamentally you'll either need a very deep analysis of the build graph, or some sort of feedback from previous runs.
@bcmills So I think I'm missing something, but say that go test reported time per package including the build instead of just the time per run: my sharding would work perfectly. I sort of simulated this in a hacky manner to test it, by simply finding the build time per package and adding it to the test run time, and the sharding was great. Except I don't want to productionize my hack; I think it might be beneficial to get this into go test. Let me know what I am missing.
@roopakv, the go command shares the work of building dependencies across every package in the same invocation, so there is no well-defined per-test build time. The best it could do would be to report how long it took to build each individual dependency, but that still doesn't tell you how long it would take to build any given set of tests starting from scratch, because it doesn't tell you how much their dependencies overlap, or how much each dependency slows the builds for the others executed in parallel.
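(One way to see how much two test packages share — the package paths are hypothetical. go list -deps -test prints the full dependency graph of a package's tests; anything that appears in both lists is built only once per invocation.)

go list -deps -test ./pkg/foo
go list -deps -test ./pkg/bar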
@bcmills I guess that is fair! So I tried adding that up myself; even if I can only get that time, it would be helpful. Also, when running go test, is the build roughly the same as building the package itself?
Not necessarily: the tests may import packages that are not needed by the non-test package variants. |
Correct. In fact, most of the cost is typically in the linking phase (after compiling). |
@bcmills I spent a bunch of time looking at our build times. It seems that our project has MANY packages, and even though the builds are cached, linking takes its own sweet time, which means this is the biggest time sink. Is there any way that we can improve the time taken to link?
There is active work to speed up linking time. I'm not aware of any specific steps you can take. |
See in particular #32094, #14624, #12259, and https://golang.org/s/better-linker. |
Given #33527 (comment), I don't think it is feasible to report a meaningful “compile time” for tests, and one that is less meaningful does not seem worth the complexity to implement. We could potentially report the final link time per test, but that also depends on factors like the load on the machine, and at any rate you can already approximate that pretty well yourself.

For sharding, probably your best bet is to use some sort of more sophisticated feedback-driven controller, or a system explicitly designed for distributed incremental builds.
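(A minimal sketch of such a feedback-driven controller, assuming the per-package durations (compile + run) were recorded by the CI pipeline itself on a previous run; the package paths and timings below are hypothetical.)

package main

import (
	"fmt"
	"sort"
)

type pkg struct {
	path    string
	seconds float64 // measured compile + test time from a previous run
}

// assign distributes packages across n shards greedily: sort by descending
// duration, then always hand the next package to the least-loaded shard.
func assign(pkgs []pkg, n int) [][]pkg {
	sort.Slice(pkgs, func(i, j int) bool { return pkgs[i].seconds > pkgs[j].seconds })
	shards := make([][]pkg, n)
	load := make([]float64, n)
	for _, p := range pkgs {
		min := 0
		for i := range load {
			if load[i] < load[min] {
				min = i
			}
		}
		shards[min] = append(shards[min], p)
		load[min] += p.seconds
	}
	return shards
}

func main() {
	pkgs := []pkg{
		{"example.com/app/big", 58.0},       // 32s compile + 26s tests
		{"example.com/app/slowbuild", 20.1}, // ~20s compile + ~100ms tests
		{"example.com/app/tiny1", 0.02},
		{"example.com/app/tiny2", 0.03},
	}
	for i, shard := range assign(pkgs, 2) {
		var total float64
		for _, p := range shard {
			total += p.seconds
		}
		fmt.Printf("shard %d (%.2fs):", i, total)
		for _, p := range shard {
			fmt.Printf(" %s", p.path)
		}
		fmt.Println()
	}
}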
I want to shard my Go package tests across different containers based on the time each package takes to test + compile. Right now we have packages whose tests run in ~20ms and some that take ~15 seconds. go test only reports the time taken to run the tests, not to compile them. So if I use CircleCI's sharding functionality, it drops the 15-second test package on a container of its own, even though that package compiles pretty quickly. It drops many packages that take 20ms onto another container, and since I don't give CircleCI the time compilation takes, my sharding will never be optimal.
If there was

go test --compile-time

which included the compile time in the test run-time reports, my problem would be resolved.

What version of Go are you using (go version)?

Does this issue reproduce with the latest release?
yes
What operating system and processor architecture are you using (go env)?

go env Output

What did you do?
1) Run go test on any package.
2) Look at test timing (an illustrative example follows).
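For example (the module path and timing below are hypothetical, not from the original report), go test prints one line per package:

ok  	example.com/pkg/foo	0.015s

The reported 0.015s covers only the time spent running the tests, not compiling or linking them.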
What did you expect to see?
What did you see instead?