Determine regressions from target branch #85
So you mean building before and after? Yeah. That would be nice to have.

Yes, essentially a "package diff" which tells you what changed.

That information can be cheaply obtained from hydra, in case hydra has attempted to build the old package before.

Yeah, but I don't really mind brute-forcing with more CPU. Would you be willing to work with me if I created a PR that produced the package diffs?

Also, when targeting master, you're likely to get cache hits if the packages weren't broken already.

Yes.

Yes. It is just an optimization, but there is no need to actually download it or to rebuild an already-failed build.
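One cheap way to do that check, as a sketch: the public binary cache exposes a `.narinfo` file per store path, so a single HEAD request tells you whether hydra has already built (and cached) a path, without downloading anything. The store path below is a made-up placeholder; in practice it would come from evaluating the target branch.

```shell
#!/usr/bin/env bash
# Sketch: check whether a store path is already in the public binary cache,
# without downloading it. A 200 on the .narinfo means hydra built and cached it.
set -euo pipefail

# Placeholder store path; in practice obtained via nix eval / nix-instantiate.
store_path="/nix/store/0123456789abcdfghijklmnpqrsvwxyz-hello-2.10"

# The cache is keyed by the 32-character hash prefix of the store path's basename.
hash=$(basename "$store_path" | cut -c1-32)

if curl -sfI "https://cache.nixos.org/$hash.narinfo" > /dev/null; then
  echo "cached: hydra already built this"
else
  echo "not cached: needs a local build (or it failed on hydra)"
fi
```

This only answers "is the build output cached", not "did the build fail on hydra"; distinguishing those would still need a query against hydra itself.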
Maybe when the 20.03 hype has died down I'll get around to this; going to focus on nixpkgs during the release cycle, though.
I think it would be useful to determine that a regression could have happened.

Actually, maybe it would be good to take a break from constant PR reviews.
For the switch, are you good with e.g. …?

Mhm. I also don't have a good name yet, but if I would see the …

maybe …
I just noticed this, I think it's a nice touch: …

This only works for local evaluation right now, though.
For what it's worth, that is one of the things I try to do with …
I would be more than okay with building the target branch's packages to see if any regression happened as part of a `nixpkgs-review` run. Something to the effect of "New successes, new failures, still failing" would be nice, similar to hydra evaluations.

My current workflow is to take all the packages that failed and run `nix build -f . --keep-going $@`, and then manually see which ones passed. This isn't too bad if it's just <10 failures, but some reviews (especially Python packages) can have 20-60 failures, and it becomes very difficult to determine regressions.
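The three-way classification described above can be sketched with plain `comm` over the two sets of failing attributes. This is only an illustration under assumptions: the attribute names are made up, and in practice the two sets would come from a build of the target branch and a build of the PR branch, not from inline data.

```shell
#!/usr/bin/env bash
# Hypothetical sketch (not a nixpkgs-review feature): classify package attrs
# into "new successes", "new failures", and "still failing", given the attrs
# that failed to build on the target branch vs. on the PR branch.
set -euo pipefail

# Example data; in practice these would be collected from two nix build runs.
failed_before=$(printf '%s\n' pkgA pkgB pkgC | sort -u)
failed_after=$(printf '%s\n' pkgB pkgD | sort -u)

# comm requires sorted input; its columns are: only-left, only-right, both.
echo "New successes (failed on target, now build):"
comm -23 <(echo "$failed_before") <(echo "$failed_after")   # pkgA, pkgC

echo "New failures (built on target, now fail):"
comm -13 <(echo "$failed_before") <(echo "$failed_after")   # pkgD

echo "Still failing:"
comm -12 <(echo "$failed_before") <(echo "$failed_after")   # pkgB
```

Only the "new failures" bucket represents actual regressions from the PR, which is what makes this much easier to act on than a flat list of 20-60 failures.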