Separate derivation for haskell tests? #9281
Haskell library tests commonly create dependency cycles: QuickCheck's tests depend on test-framework, for example, while test-framework depends on QuickCheck as a library. As a possible solution, we could have a separate derivation just for the tests (and perhaps also for the benchmarks), which would allow the tests to have dependencies that differ from those of the library itself. The stackage curator tool works similarly: it builds the tests separately.

Thoughts?

/cc @peti
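As a rough illustration of the idea (every attribute name below is hypothetical; nothing like this exists in the Haskell infrastructure today), the library derivation would build with tests disabled, while a sibling derivation builds and runs only the test suites against the already-built library:

```nix
# Entirely hypothetical sketch, written as an overlay over the Haskell
# package set; `mkTestDerivation` and `testHaskellDepends` are made up.
self: super: {
  # The library derivation skips its test suite, so it needs none of
  # the test dependencies:
  QuickCheck = super.mkDerivation {
    pname = "QuickCheck";
    version = "2.8";
    sha256 = "...";
    doCheck = false;
  };

  # The test derivation has its own dependency list, so it can depend
  # on test-framework, which in turn depends on the QuickCheck
  # *library* above; the cycle never materializes.
  QuickCheck-tests = super.mkTestDerivation {
    inherit (self.QuickCheck) pname version src;
    testHaskellDepends = [ self.QuickCheck self.test-framework ];
  };
}
```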
How would this work in practice? Right now, tests are run as part of every build, so it's basically impossible to use a package that doesn't pass its own test suite. In cases where this requirement is too strict, we can explicitly disable the tests for individual packages. When we do that, however, we almost always report the issue upstream, too, which has been recognized by several package authors as a great service for the Haskell community. If tests are no longer part of the build, then it seems difficult to keep that quality standard the way it is now, no? Specifically, I'm wondering:
My main concern here is that some test suites are very slow, so even if I might want to run the tests eventually, I don't want development to be blocked waiting for them. I thought that might be implemented by providing a ".checked" attribute on each Haskell derivation, which would work the same as the underlying derivation, but would require passing the test suite (unless doCheck = false) in order to build. It would need to depend on the .checked versions of all its dependencies as well. Also, it would be nice if it kept the test suite output around somewhere. I don't know of a way to do this without still building the test suite along with the normal package. However, I think that deferring actually running it should be doable.
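A coarse approximation of such a `.checked` attribute with the existing override machinery would be a parallel package set in which every derivation has `doCheck = true`; because the whole set is overridden, each package's dependencies come from the same set and are therefore checked as well. A minimal sketch (the name `checkedHaskellPackages` is my own):

```nix
# Sketch: a parallel Haskell package set whose members all run their
# test suites, standing in for the proposed `.checked` attribute.
checkedHaskellPackages = pkgs.haskellPackages.override {
  overrides = self: super: {
    mkDerivation = args: super.mkDerivation (args // { doCheck = true; });
  };
};
```

Its main shortcoming is exactly the one noted above: `checkedHaskellPackages.foo` rebuilds `foo` and its dependencies from scratch instead of reusing the already-built unchecked packages.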
@peti for the first point, we could invert the relationship and leave the current derivation with tests enabled, but add a special attribute that provides the same package with its tests disabled. As for the second point, that'll be a problem. Even running the tests separately, as @ryantrinkle proposes, is hard, because merely executing a test suite is difficult without support from cabal. The only way I see to solve this is to have cabal itself support it.
Perhaps we can find a way to have cabal use an already-installed version of a library to compile and run the tests? If that feature is not yet supported, it would be useful to have, so maybe it can be added. But until that happens, I see no good way to implement this :-(
Would the following overrides work for you?
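Something along these lines, presumably; a sketch using the `dontCheck` helper from `pkgs.haskell.lib`, not necessarily the exact overrides that were posted (`myPackage` is a placeholder):

```nix
# Sketch: disable the test suites of selected packages while developing.
myHaskellPackages = pkgs.haskellPackages.override {
  overrides = self: super: {
    myPackage = pkgs.haskell.lib.dontCheck super.myPackage;
  };
};
```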
@peti That would work for development, although it would mean that I'd need to do a full rebuild when transitioning from development to production, whereas the .checked attribute I was envisioning would simply run all the tests that hadn't been run yet.
@bennofs Perhaps instead of copying the working directory, it can simply re-unpack the sdist before running tests? I suppose this will break if the package produces things during the build phase that need to be used during test, but I'd be a little surprised if that's very common.
@ryantrinkle That will recompile the whole package to run tests, no?
@bennofs I think tests can be installed along with the package, in which case we can just run them. The sdist unpacking may not be necessary, but it would cover cases where the tests refer to data files and such.
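Concretely, the deferred check phase could then look something like the sketch below. It assumes, hypothetically, that the test executables were installed under `$out/libexec/tests` during the normal build, which cabal does not arrange by default:

```nix
# Hypothetical sketch: run previously-installed test executables inside
# a fresh unpack of the sdist, so relative paths to data files resolve
# as they would in the source tree.
checkPhase = ''
  tar xzf "$src" -C "$TMPDIR"
  cd "$TMPDIR"/*/
  for t in "$out"/libexec/tests/*; do
    echo "running test suite $(basename "$t")"
    "$t"
  done
'';
```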
@ryantrinkle the problem is detecting which of the installed executables are tests. Also, does this work for the …
Personally, I don't see any reliable way to build tests now but execute them later. That doesn't mean it can't be done -- it's just that I don't know how to.
@peti I asked around, and I think you're right. Maybe if that functionality gets added to cabal, we can revisit this.
In that case, should we close this for now?