Separate derivation for haskell tests? #9281

Closed
bennofs opened this issue Aug 16, 2015 · 13 comments
Labels
0.kind: enhancement (Add something new), 0.kind: question (Requests for a specific question to be answered), 6.topic: haskell

Comments

@bennofs
Contributor

bennofs commented Aug 16, 2015

Haskell library tests commonly create cycles (example: QuickCheck's tests depend on test-framework, while test-framework depends on QuickCheck as a library). As a possible solution, we could have a separate derivation just for the tests (and perhaps also for benchmarks), which would allow the tests to have different dependencies than the library itself. The stackage curator tool works similarly: it builds the tests separately.

Thoughts?
/cc @peti
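For context, the status quo this proposal would replace: today such a cycle is typically broken by disabling the tests of one of the packages outright. A minimal sketch of that workaround, assuming the usual packageOverrides mechanism and pkgs.haskell.lib.dontCheck (which just sets doCheck = false):

{
  packageOverrides = super: let self = super.pkgs; in {
    haskellPackages = self.haskellPackages.override {
      overrides = hself: hsuper: {
        # QuickCheck's test suite needs test-framework, and test-framework
        # needs the QuickCheck library, so today the cycle is broken by not
        # running QuickCheck's tests at all.
        QuickCheck = self.haskell.lib.dontCheck hsuper.QuickCheck;
      };
    };
  };
}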

@peti added the 0.kind: enhancement, 0.kind: question, and 6.topic: haskell labels on Aug 16, 2015
@ryantrinkle
Contributor

+1

@peti
Member

peti commented Aug 19, 2015

How would this work in practice? Right now, tests are run as part of every build, so it's basically impossible to use a package that doesn't pass its own test suite. In cases where this requirement is too strict, we can explicitly disable the tests for individual packages. When we do that, however, we almost always report the issue upstream, too, which has been recognized by several package authors as a great service for the Haskell community. If tests are no longer part of the build, then it seems difficult to keep that quality standard the way it is now, no? Specifically, I'm wondering:

  1. If tests are separate derivations, when will they be run?
  2. How do we run the tests without re-building the entire package?

@ryantrinkle
Contributor

My main concern here is that some test suites are very slow, so even if I might want to run the tests eventually, I don't want development to be blocked waiting for them. I thought that might be implemented by providing a ".checked" attribute on each Haskell derivation, which would work the same as the underlying derivation, but would require passing the test suite (unless doCheck = false) in order to build. It would need to depend on the .checked versions of all its dependencies as well. Also, it would be nice if it kept the test suite output around somewhere.

I don't know of a way to do this without still building the test suite along with the normal package. However, I think that deferring actually running it should be doable.
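A minimal sketch of what such a ".checked" attribute might look like as a haskellPackages override; this is purely hypothetical (no such attribute exists in nixpkgs), and it ignores the harder requirement that .checked also depend on the checked variants of its inputs:

self: super: {
  mkDerivation = args:
    let
      # default attribute: build the package without running its test suite
      unchecked = super.mkDerivation (args // { doCheck = false; });
      # pkg.checked: the same package, rebuilt with the test suite enabled
      checked = super.mkDerivation (args // { doCheck = true; });
    in
      unchecked // { inherit checked; };
}

Because the two variants are distinct derivations, building pkg.checked would still rebuild the whole package rather than merely re-running the tests, which is exactly the second problem raised above.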

@bennofs
Contributor Author

bennofs commented Aug 19, 2015

@peti for the first point, we could invert the relationship and leave the current derivation with tests enabled, but add a special build attribute that contains just the library, without running any tests. build would only depend on the build of its dependencies, while the derivation itself would depend on the other packages plus its own build.

As for the second point, that'll be a problem. Even running the tests separately, as @ryantrinkle proposes, is problematic, since just running the tests is difficult without cabal support. The only way I see to solve this is to have build produce an out path containing the working directory of the cabal build, so that we can still run cabal test. That has its own problems, however:

  1. More store paths
  2. Increased disk space usage, since all the temporary objects will be in the store too (they can be garbage-collected after builds have completed though)

Perhaps we can find a way to have cabal use an already-installed version of a library to compile and run the tests? If that feature is not yet supported, it would be useful to have, so maybe it can be added. But until that happens, I see no good way to implement this :-(
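The inversion described at the start of this comment would look roughly like the earlier sketch with the roles flipped; again purely hypothetical, and it leaves out the harder part of making the tested derivation depend on the build variants of its inputs:

self: super: {
  mkDerivation = args:
    let
      # hypothetical "build" attribute: just the library, tests never run
      build = super.mkDerivation (args // { doCheck = false; });
    in
      # the package itself keeps running its tests, as today, and
      # additionally exposes pkg.build for consumers that want to skip them
      super.mkDerivation args // { inherit build; };
}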

@peti
Member

peti commented Aug 19, 2015

@ryantrinkle,

My main concern here is that some test suites are very slow, so even if I might want to run the tests eventually, I don't want development to be blocked waiting for them. I thought that might be implemented by providing a ".checked" attribute on each Haskell derivation.

Would the following overrides work for you?

{
  packageOverrides = super: let self = super.pkgs; in {

    # day-to-day set: every package built with its test suite disabled
    haskellPackages = self.haskellPackages.override {
      overrides = self: super: {
        mkDerivation = args: super.mkDerivation (args // { doCheck = false; });
      };
    };

    # the unmodified set, still running every test suite
    haskellPackages_checked = super.haskellPackages;

  };
}

@ryantrinkle
Contributor

@peti That would work for development, although it would mean that I'd need to do a full rebuild when transitioning from development to production, whereas the .checked attribute I was envisioning would simply run all the tests that hadn't been run yet.

@ryantrinkle
Contributor

@bennofs Perhaps instead of copying the working directory, it can simply re-unpack the sdist before running tests? I suppose this will break if the package produces things during the build phase that need to be used during test, but I'd be a little surprised if that's very common.

@bennofs
Contributor Author

bennofs commented Aug 19, 2015

@ryantrinkle That will recompile the whole package to run tests, no?

@ryantrinkle
Contributor

@bennofs I think tests can be installed along with the package, in which case we can just run them. The sdist unpacking may not be necessary, but it would cover cases where the tests refer to data files and such.

@bennofs
Contributor Author

bennofs commented Aug 19, 2015

@ryantrinkle the problem is detecting which of the installed executables are tests. Also, does this work for the detailed test-suite type?

@peti
Member

peti commented Aug 20, 2015

Personally, I don't see any reliable way to build tests now but execute them later. That doesn't mean it can't be done -- it's just that I don't know how to.

@ryantrinkle
Contributor

@peti I asked around, and I think you're right. Maybe if that functionality gets added to cabal, we can revisit this.

@grahamc
Member

grahamc commented Apr 15, 2016

In that case, should we close this for now?
