[DO NOT MERGE] trigger full rebuild #11502
Conversation
/azp run |
Azure Pipelines successfully started running 1 pipeline(s). |
None of the failures on OSX and Linux are due to msys2 problems... |
Yes, but the previous thing that kept causing this PR to hang the builders forever was |
The boost ARM ones should be fixed by #11545 |
/azp run |
Azure Pipelines successfully started running 1 pipeline(s). |
The sigslot:x64-osx failure doesn't repro locally, and I didn't see this issue in the latest CI testing status. |
/azp run |
Azure Pipelines successfully started running 1 pipeline(s). |
…oft#11839)
* [vcpkg] Remove do-nothing Set-Content from Windows azure-pipelines.yml.
* [vcpkg] Fix OSX CI by ensuring the downloads directory exists in advance, and extract common command line parameters with powershell splatting.
* [tensorflow-cc] Prevent hang building tensorflow-cc asking to configure iOS.
* Skip ignition-msgs5:x64-osx
Co-authored-by: Robert Schumacher <[email protected]>
* [Arrow] Update to 0.17.1
* Remove arrow:x64-linux=fail from ci.baseline.txt. Add explicit tool dependencies on Flex and Bison for Linux and OSX.
* Revert arrow dependency on Flex/Bison; it's Thrift that needs them and its portfile is already fine.
* Use vcpkg_fail_port_install(ON_ARCH x86 arm arm64) instead of a custom check. Remove thrift:x64-osx=fail from ci.baseline.txt (we know arrow depends on it, and arrow:x64-osx has been shown to work in a 3rd party project).
* Disable using pkg-config files to locate dependencies in arrow. This is incompatible with vcpkg as these files refer to paths in the packages directory rather than the installed directory, so this only works if the packages haven't been cleaned.
* Mark thrift:x64-osx as still failing until a proper solution for Bison can be found.
* Update ports/arrow/portfile.cmake

Co-authored-by: Adam Reeve <[email protected]> Co-authored-by: NancyLi1013 <[email protected]>
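To illustrate the vcpkg_fail_port_install change mentioned in the commit above, here is a minimal hypothetical portfile excerpt; it is not the actual arrow portfile, and the surrounding steps are only hinted at in comments:

```cmake
# Hypothetical portfile excerpt; the architecture list mirrors the commit message above.
# The hand-rolled guard it replaces typically looked like:
#   if(VCPKG_TARGET_ARCHITECTURE STREQUAL "x86")
#       message(FATAL_ERROR "${PORT} does not support x86")
#   endif()
# vcpkg_fail_port_install() expresses the same intent in one declarative call:
vcpkg_fail_port_install(ON_ARCH "x86" "arm" "arm64")

# ... the rest of the portfile (vcpkg_from_github, vcpkg_configure_cmake, etc.)
# then only runs for the architectures that are actually supported.
```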
22909dd to 76d45d5
I still see a regression on msix on x86-windows. |
@cenit That pipeline test case has been deprecated. |
@JackBoosY as you can see, we are still far from a green checkmark on the baseline... unfortunately
?? this PR is perfectly in sync with master, only triggering a full port-tree rebuild |
@cenit Looks like more and more regressions appear, and I'm fixing them. |
@cenit What are you actually trying to accomplish with this? All this is doing is showing that when you run 1300 separate build systems, the probability of any one of them having flaky behavior approaches 1, and in doing so it burns a ton of compute time and wastes cache storage space. We have the binary caching system precisely because it is necessary to ever make any progress with a system like this. |
In particular, almost all the recent ones are failures to download the sources from a third-party server, which is absolutely not a bug in vcpkg. |
I was triggering this at the beginning because I had tons of regressions in my opencv PR, none of which were due to my modifications (which, on the other hand, were extensive and required careful checks). |
I'm not saying that necessarily. This PR highlights that we need some process in place where we monitor for regressions caused by overall tool changes; we should probably be running full rebuilds at least once per week, more likely once per day, and publishing that information for others to consume here. It's just that doing this as a PR elevates a normal process into a 'fire drill', because someone is blocked waiting on it :)
It doesn't matter how official the triplet is; the sources are hosted by each third party associated with a given package. For example, if SourceForge chooses a bad mirror for a given run, you get a whole bunch of failures, none of which are really port bugs. |
That's what I meant by "caching mechanism for original sources". OK, thanks for clarifying your points. Of course an official mechanism to keep the tree in shape would be terrific, and of course much better than a PR destined to be closed in the end. |
I'm going to discuss changing our CI builds to never run with binary caching enabled, only PR builds, at our next discussion meeting. (I want to make sure the team is OK with the resulting lower quality of service of PR validation since it's more work for the build fleet given that we have to stay under a core count quota, and the resulting increased expense of keeping the VMs alive longer, rather than making a change like that unilaterally) |
I'd say that a reasonable compromise would be to expand hash coverage, that is, what declares a binary cache entry compatible or not. I mean, we should not hash just one "outsider" cmake script (the target fixup is hardly important during the port build), but all the "fundamental" cmake scripts that would change portfile behaviour. |
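To make the idea above concrete, the following is a conceptual sketch in plain CMake, not vcpkg's actual ABI-hashing code; PORT_DIR, VCPKG_ROOT_DIR, and the two helper-script paths are assumptions chosen only for illustration:

```cmake
# Sketch: fold the portfile plus a set of "fundamental" helper scripts and the
# tool versions into one key that decides whether a cached binary is reusable.
set(hash_inputs "")
foreach(script IN ITEMS
    "${PORT_DIR}/portfile.cmake"
    "${VCPKG_ROOT_DIR}/scripts/cmake/vcpkg_configure_cmake.cmake"
    "${VCPKG_ROOT_DIR}/scripts/cmake/vcpkg_build_cmake.cmake")
  file(SHA512 "${script}" script_hash)        # hash each script's contents
  string(APPEND hash_inputs "${script_hash};")
endforeach()
# Fold in tool identity as well (a real system would also include the compiler
# and ninja), so that upgrading the toolchain invalidates stale cached binaries.
string(APPEND hash_inputs "cmake=${CMAKE_VERSION}")
string(SHA512 cache_key "${hash_inputs}")
message(STATUS "binary cache key: ${cache_key}")
```

Any edit to one of the hashed scripts then changes the key and forces a rebuild, while unrelated changes leave cached binaries valid.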
We already do things like that.
The compiler is being worked on for that, to enable the binary caching stuff. CMake and Ninja versions are already included. The issues this PR has found have been:
and I don't believe there's anything we can do to the caching system to truly address such problems. We can mitigate the first two in CI by caching the downloaded sources. For the third, we can't do much other than skipping such flaky ports; we can't really be in the business of fixing everyone's build races in ~1300 projects :). We could theoretically mitigate the next by doing increasingly exhaustive rebuild schemes (e.g. build each port with everything installed except its dependencies, build each port with only its dependencies and remove unrelated ports after each build, etc.), but that gets expensive in terms of compute time fast. |
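As one illustration of the "cache the downloaded sources" mitigation: as far as I understand, vcpkg's download helper already skips the fetch when a file with the expected name and hash is present in the downloads directory, so preserving that directory between CI runs removes most of the dependence on third-party servers. A hypothetical call is sketched below; the URLs, filename, and hash are placeholders, not a real port:

```cmake
vcpkg_download_distfile(
    ARCHIVE                                           # out-variable receiving the path of the downloaded file
    URLS "https://example.com/foo-1.0.tar.gz"         # several mirrors can be listed; the next is tried on failure
         "https://mirror.example.org/foo-1.0.tar.gz"
    FILENAME "foo-1.0.tar.gz"                         # name under the shared downloads/ directory
    SHA512 0                                          # placeholder; the real 128-hex-digit hash goes here
)
```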
@BillyONeal it’s definitely a bold step in the right direction! We can close this one, sure. The main goal here was to highlight the problem, and it has been addressed both with port fixes and infrastructure fixes. |
Describe the pull request
Due to the seemingly infinite number of regressions in #11130, I fear that something slipped in and broke many ports. CMake 3.17?
Here I just want to test in CI what happens when triggering a full rebuild...