SciPy v1.15.0 #284
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR (…)
Raised an issue for 5eb8df4 / 36a062e: conda-forge/conda-smithy#2061
I cannot tell exactly what's going wrong here @ilayn, but note that we make it all the way to the very last of the import tests (see scipy-feedstock/recipe/meta.yaml, lines 130 to 172 at 07e1f9f).
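For context, the "import tests" referenced above are the recipe's list of modules that conda-build imports in a fresh interpreter after the build; any non-zero exit code fails the test phase. A minimal sketch of that mechanism (the module names below are stdlib stand-ins, not the recipe's actual list):

```python
import subprocess
import sys

def run_import_tests(modules):
    """Import each module in a fresh interpreter, roughly as conda-build does."""
    for mod in modules:
        proc = subprocess.run([sys.executable, "-c", f"import {mod}"])
        if proc.returncode != 0:
            raise RuntimeError(f"import test failed for {mod!r}")

# Stand-ins for the scipy submodules listed in the recipe:
run_import_tests(["json", "math"])
```

A corrupted interpreter state or a leftover error code from an earlier step would surface here as a failure even when the imports themselves are fine, which matches the symptom described in this thread.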
In fact, we even do so twice (well, conda-build does a weird log duplication for the import tests; not 100% sure it's actually run twice). In any case, it seems like something is leaving an error code (or some other dirtied resource) behind so that conda-build sees some kind of failure at the end of the import tests. It's clear as mud to me what might be the cause.
Whoa, that's really cryptic stuff. Regardless of the issue, hats off to you for handling this chaos. I can only see one tiny sliver of hope: that the failure in the Azure Pipelines stacktrace ends with …
Also, the SciPy version a bit above in the package list says 1.14.1 and local. So maybe a cache needs to be invalidated? If it's not that, I'm with you with the snorkel view about this.
Unfortunately you're fishing for red herrings here. 😅 That's a regular message for failing tests, and the …
But it is indeed unusual to fail already at the import stage without any other info. During the pytest stage we normally get better errors. I guess that's also an option here - disable the import tests and see what happens...
I think this is not scipy-related; something else is broken in the toolchain. It looks like the .bat file is returning an error.
But never mind, I already saw that it is building, so that's good enough for me. I don't want to cause more work for you.
Unfortunately, it's 100% guaranteed to be related to the changes that happened between 1.14 and your PR. If you want I can test SciPy main and see if it works as well, but you see a CI run further up where 1.14 passed CI today. The fact that you see the failure in the bat script is just because that's the wrapper that executes the test suite. Unfortunately, it appears that the same kind of corruption also happens when we invoke pytest (probably during test collection, which already imports most of SciPy). It's a frustrating kind of error to run into and to debug, but it's very real.
Hmm, then I have no idea what it might be, and it's even more perplexing that everything passes in the SciPy CI. Frustrating indeed.
Good news (for your PR at least). SciPy …
I commented there already, but there's less noise here I think. The MSVC job of scipy is going through, but that one is using ifx. Could it possibly be a Flang problem?
Always a possibility, but not my first suspicion. It could also be a VS2019 vs. VS2022 issue. We'll see what the bisection yields.
Do you mind adding the very last commit from scipy/scipy#21553, just to test if it will fail again? Then I would know how to recreate the failing test locally.
CI retriggered, which should pick up the newest commits from that branch.
Thank you, fingers crossed.
Apologies Axel, I'm losing focus these days. I missed a few places where randomization was done in the old style. Could you retrigger this, please?
I mentioned this in the SciPy PR already, but just to allow you to move at your own speed: you can retrigger CI here yourself with the following magic incantation (and because the sources point to your branch, that will pick up whatever new commits are in there): @conda-forge-admin, please restart ci
@conda-forge-admin, please restart ci
@conda-forge-admin, please restart ci
Congrats, bug squashed successfully! 😎
This hasn't been true since the switch to Meson for 1.26.x
  {% if numpy is not defined %}
- {% set numpy = "2.0.0" %}
+ {% set numpy = "2.2.1" %}
Not sure if this mattered for previous builds or not - I suspect not, since numpy is set centrally in the conda-forge pinnings - but this change is more correct either way.
It matters because we do the <2.{N+3} handling here, and that doesn't work with the global pinning (which now pins major-only).
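The <2.{N+3} rule mentioned above (for numpy 2.N, cap the runtime requirement below 2.(N+3)) is computed in Jinja inside the recipe; here is the same computation as an illustrative Python sketch (the function name is ours, not part of the recipe):

```python
def numpy_upper_bound(numpy_version: str) -> str:
    """Given the numpy version a build pins against (e.g. "2.2.1"),
    return the "<2.{N+3}" upper bound discussed in this thread."""
    major, minor = numpy_version.split(".")[:2]
    return f"<{major}.{int(minor) + 3}"

print(numpy_upper_bound("2.2.1"))  # -> <2.5
print(numpy_upper_bound("2.0.0"))  # -> <2.3 (what the old 2.0.0 default would yield)
```

This matches the metadata quoted below: the 1.14.1 package carries numpy <2.4, i.e. it was built against numpy 2.1, not the 2.0.0 fallback.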
The 1.14.1 package has this metadata:
- numpy <2.4
- numpy >=1.21,<3
- numpy >=1.23.5
So it's already correct for previous builds (yay) - apparently numpy was defined in the conda-forge builds, or it would have said <2.3.
Anyway, this change is still helpful.
package:
  name: scipy-split
  version: {{ version }}

source:
  # The sdist distributed by scipy contains pre-compiled pythran sources,
Note that this has not been true since the switch to Meson. We should probably switch back to the sdist from PyPI, so we can avoid all this manual git submodule handling?
It's a bit annoying to update the hashes, but I do prefer building from source. I really don't want to pick up anything pre-cythonized or otherwise precompiled. If that's not the case anymore, I'm happy to switch back to the upstream tarballs.
There are no pre-generated sources except for the trivial capturing of the git hash that's being built from in version.py (that one is necessary because it's VCS info). And that one is actually broken in the current recipe; switching to a proper sdist will fix it:
>>> scipy.version.git_revision
'Unknown'
The GitHub-generated tarballs should never be used for Python packages if that can be avoided, because they're broken in multiple ways - this missing versioning info, for one, will affect many Python packages.
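A hedged sketch of how the breakage above could be detected: a healthy sdist-based build carries a 40-character hex git hash in scipy.version.git_revision, while the GitHub-tarball build carries the placeholder 'Unknown'. The helper below is ours for illustration, not SciPy API:

```python
import re

def git_revision_is_valid(git_revision: str) -> bool:
    """True if the revision looks like a real 40-char lowercase hex git hash."""
    return re.fullmatch(r"[0-9a-f]{40}", git_revision) is not None

print(git_revision_is_valid("Unknown"))  # -> False (the broken tarball case)
print(git_revision_is_valid("a" * 40))   # -> True (a well-formed hash)
```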
Could you please cherry-pick Lucas' commit from #291?
Sure, will do after letting the current run complete (it's almost done).
Or not - these aarch64 builds are crazy slow, 3h 17min and going. We may want to start skipping tests marked as slow; those are tested on native aarch64 CI machines upstream, after all.
We're already only running the fast tests, but emulation is simply slow. It's not a big issue though, because we don't build that often. Just push; no need to wait for the runs to finish.
This avoids having to keep track of all the git submodules manually, and it fixes a minor bug because VCS git hash info isn't exported correctly in the auto-generated tarballs from GitHub.
For conda-forge/conda-smithy#2060 & scipy/scipy#21553 / scipy/scipy#21561. Testing for 1.15.0rc1.