Packages that can be noarch #1840
Comments
Would be amazing to have pytest be noarch! hypothesis might be a candidate as well. It recently got made yesarch again, but would presumably be noarch-capable if the django workaround for backports.zoneinfo arrives.
It's worth noting that with […]. Perhaps if we wanted to simplify that, we could turn the backport packages it depends on into […].
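To make the noarch-ification concrete, here is a minimal editorial sketch of the kind of change involved. It is not taken from any specific feedstock; the backport name is borrowed from the discussion above and stands in for any conditional dependency.

```yaml
# Sketch with placeholder names, not a real feedstock.
# Before: one build per Python version and platform, because a backport
# is only needed on older Pythons:
#   run:
#     - python
#     - backports.zoneinfo  # [py<39]
# After dropping older Pythons, the selector disappears and the package
# can be declared noarch:
build:
  noarch: python
  number: 0
requirements:
  host:
    - python >=3.9
    - pip
  run:
    - python >=3.9
```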
Desert: conda-forge/desert-feedstock#8
And some more: […]. Also, the "CircleCI Pipeline" error pops up on all of them. Is there any way to remove this?
I think this package could be made noarch, but it would require a package for every platform. Basically, it is a C library that is loaded with ctypes into a python package:
I made some queries on cs.github.com and found like... 100 feedstocks we could noarchify. I haven't reviewed all cases yet, but I am more or less confident most would be eligible. Check the first message!
Slightly related project that popped up a couple of times but was never really taken up. If we can create one source of truth for packages where the selectors can be ignored, we could add them all to Grayskull, and I could just run Grayskull on that list. There is already https://github.com/conda-incubator/grayskull/blob/6955333ee01f83ba6ae6e8dc76ce1f576e7c762e/grayskull/strategy/config.yaml#L465 and https://conda-forge.org/docs/maintainer/knowledge_base.html#non-version-specific-python-packages but it's not connected.
I can fix the circle stuff. We can delete the webhooks.
Folks, how are we handling the old jupyter extensions that have a pre/post script? cc @xhochy
cc @bollwyvl
Poetry is already noarch.
Looks like that just happened today. Likely the list was made before today and the box wasn't checked yet. Have now checked it.
Haven't looked, but we can likely emit appropriate […].
@hmaarrfk, TomoPy is like this. I have one noarch python package which depends on platform specific packages which contain the shared libraries. Could have separate feedstocks, but I just have one and build the same noarch package every time.
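One possible shape of that split, sketched with made-up names (libfoo, foo, foo-split are placeholders, and this is not TomoPy's actual recipe): the compiled library is built per platform as its own output, while the ctypes-based Python wrapper stays noarch and only run-depends on it.

```yaml
# Hypothetical multi-output recipe; all names are placeholders.
package:
  name: foo-split
  version: 1.0.0

outputs:
  # Built once per platform: the compiled shared library.
  - name: libfoo
    requirements:
      build:
        - {{ compiler('c') }}

  # Built once, noarch: the Python wrapper that loads libfoo via ctypes.
  - name: foo
    build:
      noarch: python
    requirements:
      host:
        - python
        - pip
      run:
        - python
        - {{ pin_subpackage('libfoo') }}
```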
@beckermr Is it enough to delete the webhooks at e.g. https://github.com/conda-forge/curtsies-feedstock/settings/hooks?
Is there any way someone outside of core can do that? I have so many packages that have that; pinging core each time is going to spam your inboxes...
Yes, deleting the webhooks is fine. @BastianZim If you want to write an admin migration to do that, feel free at conda-forge/admin-migrations.
I'm OK if you ping me. With every migration we find tons of packages that can be noarch. In the last one I converted dozens, and it will be easier to merge someone else's PR than to do it myself.
That would be quite complicated because the new semi-noarch recipes are not one-size-fits-all.
Ahh true, forgot that. Is there any guideline on that? I've never written one.
Thank you! But the CircleCI stuff also appears in my normal feedstocks so it would be quite a lot...?
I think this is just about removing the CircleCI hook?
Ah. Then it is definitely worth a try with a migrator.
The rest would be nice but is probably impossible. Although, if we have a list of empty packages we can ignore (#1840 (comment)), I can run Grayskull on most and probably automate ~80% of the PRs to the point where they just need to be merged.
@ocefpaf I am talking about an admin migration at conda-forge/admin-migrations, not a bot migration.
These feedstocks are using M×P jobs when they could be using 2... (M = number of enabled architectures, between 3 and 6; P = number of Pythons, currently 3 or 4). This is 9 jobs at best, 24 at worst!
Couldn't we do the same thing with […]?
Yes, sure, that will always work. But it's a simpler modification to replace just […].
Maybe we can move […].
I'm not sure that's compatible with work such as conda-forge/conda-forge-ci-setup-feedstock#210? Or would the suggestion be - for debugging purposes - to switch off noarch when preparing a PR?
Have different thoughts on how to approach that as well, but that's a discussion for a different issue |
Just an FYI:
We appear to still have a non-negligible number of users with […].
Wonder if we could just create those as arch packages and stick them under a special label (like […])?
@jakirkham, not a bad idea.
Even if the uploads are blocked, we could include the recipe in the docs. It is probably a couple of lines. Once built it should be installable from the local package cache.
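If that route were taken, the recipe could indeed be just a few lines. This is an editorial sketch: the package name and version are placeholders for whatever dependency needs to be satisfied, and nothing here is published on conda-forge. Building it locally with conda-build makes it installable from the local package cache, as suggested above.

```yaml
# Hypothetical "empty" stand-in recipe with a placeholder name.
package:
  name: some-unneeded-backport
  version: 0.0.0

build:
  noarch: generic
  number: 0

requirements:
  run:
    - python

about:
  license: BSD-3-Clause
  summary: Empty placeholder that only satisfies a leftover dependency
```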
Do we need the actual packages or can we mock them in a repodata patch? Just an idea in case that's easier to implement.
Thinking about this more, I think it is on end users to build these packages themselves. Not all cases simply check for the existence of these packages. Some of them (like […]).
Is the following issue known: conda-forge/spyder-kernels-feedstock#87
Looking at what ipykernel does is probably relevant in the Spyder case. I figure we would have heard if it wasn't working.
Is https://github.com/conda-forge/xcb-proto-feedstock a candidate? It seems like it puts things outside the python directory so I'm not sure if it would work. But the […]
Maybe, but you'd need packages for all the platforms (six so far?).
Original issue description:
By dropping support for python<3.8, or by using noarch_platforms as in #1839.
Here we collect some packages. PRs are welcome to help convert them to noarch: python (e.g. sh-feedstock#23).
Harder ones
@jaimergp: Feedstocks that have a py{<,<=}3{6,7,8} selector without compiler dependencies, are still not noarch, and have no python_impl mentions (cs.github.com query). Search results say 110+ feedstocks, but you can only retrieve 100 at a time, so here we are. Some of them might not apply for other reasons (could not review all of them).
cc @conda-forge/core
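For context, the noarch_platforms route mentioned above roughly looks like the following sketch. The platform list and the colorama dependency are placeholders; #1839 and the conda-forge documentation are the authoritative references. The package stays noarch: python, but conda-smithy renders one noarch build per listed platform, and the __win/__unix virtual packages make the solver pick the matching variant at install time.

```yaml
# conda-forge.yml (sketch): render one noarch build per listed platform.
noarch_platforms:
  - linux_64
  - win_64
---
# meta.yaml fragment (sketch): still noarch: python, but with
# platform-dependent run requirements.
build:
  noarch: python
  number: 0
requirements:
  run:
    - python
    - colorama  # [win]
    - __win     # [win]
    - __unix    # [unix]
```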