
Python: run tests in separate derivation #26400

Open
FRidh opened this issue Jun 5, 2017 · 6 comments
Labels
0.kind: enhancement Add something new 6.topic: python 6.topic: testing Tooling for automated testing of packages and modules 9.needs: community feedback

Comments

@FRidh
Member

FRidh commented Jun 5, 2017

Issue description

Currently we run tests directly after a package has been built. Unfortunately, we regularly need to disable tests to avoid circular dependencies introduced by test dependencies.

Running tests in a separate derivation would:

  • allow us to test all packages, because circular dependencies are no longer an issue;
  • reduce build time of the actual packages, because tests are run outside of them;
  • add a bit of overhead, because there are now two derivations per package;
  • allow actual integration testing, because we would build a new env in which we perform the testing.

I propose we then add a tested attribute to Python packages that corresponds to this second derivation. This would typically also be the attribute for CI to build.
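The proposed split could look something like the following. This is a hedged sketch, not an actual nixpkgs implementation: the package name `example`, the `testEnv` helper, and the shape of the `tested` attribute are all hypothetical; it only illustrates building with `doCheck = false` and running the suite in a second derivation against a fresh environment.

```nix
# Hypothetical sketch: `example` skips its tests at build time, and
# `example.tested` is a separate derivation that runs the suite in a
# fresh env containing the already-built package plus pytest.
{ runCommand, python3 }:

let
  example = python3.pkgs.buildPythonPackage {
    pname = "example";
    version = "1.0";
    src = ./.;
    doCheck = false; # tests moved to the separate derivation below
  };

  # A new env, as described above, enabling actual integration testing.
  testEnv = python3.withPackages (ps: [ example ps.pytest ]);
in
example.overrideAttrs (old: {
  passthru = (old.passthru or { }) // {
    tested = runCommand "example-tested" { } ''
      ${testEnv}/bin/python -m pytest ${example.src}/tests
      touch $out
    '';
  };
})
```

Under this scheme CI would build `example.tested` rather than `example`, so a test failure still blocks the channel without sitting on the package's own build path.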

@copumpkin
Member

I've often wanted this for other types of derivations, too, but I'd definitely support this.

@anderspapitto
Contributor

One potential snag: there are probably packages where some tests rely on access to temporary build files.
In those cases, you'd either have to disable those tests, or the tested attribute would have to redo the whole build.

@peterhoeg
Member

Earlier conversation #9281.

@c0bw3b c0bw3b added the 6.topic: testing Tooling for automated testing of packages and modules label Apr 28, 2019
@deliciouslytyped
Contributor

deliciouslytyped commented Jul 28, 2019

It's also very annoying during packaging / Nix package development when a test fails after an hour-long build and you have to go through the whole process again just to work on fixing the tests.
(Granted, you can use nix-shell.)

@stale

stale bot commented Jun 2, 2020

Thank you for your contributions.

This has been automatically marked as stale because it has had no activity for 180 days.

If this is still important to you, we ask that you leave a comment below. Your comment can be as simple as "still important to me". This lets people see that at least one person still cares about this. Someone will have to do this at most twice a year if there is no other activity.

Here are suggestions that might help resolve this more quickly:

  1. Search for maintainers and people that previously touched the related code and @ mention them in a comment.
  2. Ask on the NixOS Discourse.
  3. Ask on the #nixos channel on irc.freenode.net.

@doronbehar
Contributor

I don't think we should add such a tested attribute by default, or put the tests in a passthru.tests derivation. In fact, I think that running the tests in the same derivation has the advantage that it forces us to observe the failure, which in turn can warn us about failures in dependent packages as well. The same reasoning also leads me to advocate transitioning from adding a passthru.tests.version to using versionCheckHook. See also:
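For contrast, the versionCheckHook approach mentioned above keeps the check inside the package's own build, so a mismatch fails the derivation itself rather than a separate passthru.tests.version derivation that nothing forces you to build. A minimal sketch, assuming a hypothetical package `example` whose binary prints its version:

```nix
# Sketch of the versionCheckHook pattern from nixpkgs: the hook runs the
# installed program during installCheckPhase and asserts its output
# contains finalAttrs.version, failing the build on a mismatch.
{ stdenv, versionCheckHook }:

stdenv.mkDerivation (finalAttrs: {
  pname = "example";
  version = "1.0";
  src = ./.;

  nativeInstallCheckInputs = [ versionCheckHook ];
  doInstallCheck = true;
})
```

This is the trade-off the comment describes: an in-derivation check cannot be skipped, at the cost of coupling test failures to the build.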

@stale stale bot removed the 2.status: stale https://github.com/NixOS/nixpkgs/blob/master/.github/STALE-BOT.md label Aug 1, 2024
7 participants