Manual pypy37 migration #155

Merged (2 commits) on Jan 12, 2021

Conversation

h-vetinari (Member)

There are some weird cycles in the dependency graph that make the migrator think that scipy depends on pandas and psycopg2, among others; see regro/cf-scripts#1331.

I verified that all dependencies have been built (hopefully not overlooking anything), so I'm raising this PR to break some of these cycles.

@conda-forge-linter

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

@beckermr (Member)

Thank you @h-vetinari!

@h-vetinari (Member, Author)

The last drone job timed out. I've seen this pattern a lot in the blas-variant PRs: 2-3 drone jobs go through in 25-30 min, the ones beyond that seem to get fewer resources and then ultimately fail at the 60 min timeout (somewhere between finishing the build and the middle of the test suite). Let's see if following https://github.com/scipy/scipy/blob/v1.6.0/setup.py#L234-L237 has any effect.
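
For reference, the linked setup.py lines appear to relate to numpy.distutils' parallel compilation, which is driven by the NPY_NUM_BUILD_JOBS environment variable (or a "-j N" option to the build command). A minimal sketch of what the build script could export, assuming conda-build's CPU_COUNT is available in the environment (the change actually attempted in this PR is shown in the diff further down):

    # Sketch only: opt in to parallel compilation of the extension modules.
    # numpy.distutils reads NPY_NUM_BUILD_JOBS at build time; CPU_COUNT is
    # provided by conda-build, with a fallback of 4 if it is unset.
    export NPY_NUM_BUILD_JOBS="${CPU_COUNT:-4}"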

@beckermr (Member)

There are two different machines that drone runs jobs on. One is faster than the other, I think; @isuruf has the details.

@h-vetinari (Member, Author)

@beckermr: There are two different machines that drone runs jobs on. One is faster than the other, I think; @isuruf has the details.

How much control do we have over this process? If the timeout were 90min or so, the build would very likely pass even on the slow machines (instead of retrying it all the time and spending much more CI time in total).

@beckermr (Member)

How much control do we have over this process? If the timeout were 90min or so, the build would very likely pass even on the slow machines (instead of retrying it all the time and spending much more CI time in total).

None. Yay!

@h-vetinari (Member, Author)

None. Yay!

Then I suggest merging as-is and manually restarting the drone build once or twice until all variants eventually get built.

@beckermr added the "automerge" label (Merge the PR when CI passes) on Jan 11, 2021
@beckermr (Member)

OK. Let's let it get as far as it can to make sure the tests pass. Then we can merge.

@h-vetinari (Member, Author)

OK. Let's let it get as far as it can to make sure the tests pass. Then we can merge.

I just removed the patch:

commit 07dd65932da4c773990bb6480503724ba78b74a4
Author: H. Vetinari <[email protected]>
Date:   Mon Jan 11 20:29:17 2021 +0100

    try upping build parallelism

diff --git a/recipe/build.sh b/recipe/build.sh
index 2ebd6d5d5..9cf1c5f4a 100644
--- a/recipe/build.sh
+++ b/recipe/build.sh
@@ -4,6 +4,7 @@
 # can have a G77 ABI (currently only MKL)
 export SCIPY_USE_G77_ABI_WRAPPER=1

+set NPY_NUM_BUILD_JOBS=${CPU_COUNT}
 if [[ "$python_impl" == "pypy" && "$target_platform" == "linux-ppc64le" ]]; then
     $PYTHON setup.py install --single-version-externally-managed --record=record.txt
 else
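
One note on the patch shown above: in bash, "set NPY_NUM_BUILD_JOBS=${CPU_COUNT}" assigns a positional parameter rather than defining the variable, so if the patch were ever reinstated, the conventional form would be a sketch along these lines (same assumption that conda-build provides CPU_COUNT):

    # In bash, 'set VAR=value' does not create VAR; exporting it makes the
    # value visible to the setup.py subprocess that reads NPY_NUM_BUILD_JOBS.
    export NPY_NUM_BUILD_JOBS=${CPU_COUNT}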

The pipeline with this exact state had already run through before, with everything but one aarch build passing. Proof: compare the first commit in "merge xxx into yyy" before (clickable from here) and after (clickable from here).

I say this because aarch builds will almost certainly fail again, and so the automerge label will be useless. Which of the 4 aarch jobs fails is pretty random, so IMO this can be handled after merging by restarting until the missing job also runs through on the faster machine.

@github-actions (Contributor)

Hi! This is the friendly conda-forge automerge bot!

I considered the following status checks when analyzing this PR:

  • linter: passed
  • drone: failed
  • travis: passed
  • azure: passed

Thus the PR was not passing and not merged.
