RFC: add strong_constrains on sysroot_{{ target_platform }}? #63
Comments
Although I sympathize, I don't like this condition getting added to basically all compiled packages as a dependency.
This also makes sense only for packages that end up in the build environment.
It would just be a constraint, not a dependency. I thought that kind of metadata was the point of the stdlib-infrastructure, i.e. it's obviously going to affect a lot of packages. Should we not strive for correct metadata? I guess the only mitigating factor is that we'll be moving to 2.17 globally soon, so it'll become the "ambient" expectation again.
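(For illustration only, not from this thread: roughly how a constraint differs from a dependency in conda-build recipe metadata; the package names and versions are made up.)

```yaml
requirements:
  run:
    - somelib >=1.0            # hard dependency: always co-installed with the package
  run_constrained:
    - sysroot_linux-64 >=2.17  # constraint: only restricts the solve if a sysroot gets installed anyway
```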
AFAICT we cannot know in advance whether a package is going to be used in a build environment, so the alternative is that people just run into the missing symbol errors and we document how to fix them?
Other than the test section, when will this ever help?
Anything where the dynamic linker looks for symbols from the newer glibc, so in principle anything built against 2.17 that gets executed (either in build or test env)? Note that I said we can just document how to deal with the missing symbols, so this isn't black-and-white to me - I asked because I don't see the issue with adding the run-constraint, even if it gets added very broadly.
Can you give an example recipe? I still don't understand.
In libcxx we failed during the testing phase while trying to load the freshly built library. My point (to the degree that I'm not missing something) was that loading a library is not restricted to the test phase: if there's an executable called in the build environment that needs to load something, we'd hit the same missing symbols there.

AFAICT we'll be in a similar situation with conda_2_28 / alma 8. Our images will be cos7-based, but people will eventually start depending on newer glibc. To get this to work without the constraint, dependent feedstocks then either need to add the sysroot manually or change the image version. I like the constraint because it minimizes changes in dependent feedstocks (there's a corner case where bumping the image version is still necessary, but I still think we'd cover a large percentage with the constraint).
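(A hedged sketch of the "add the sysroot manually" workaround mentioned above, in a dependent recipe's meta.yaml; the pinned version is illustrative.)

```yaml
requirements:
  build:
    - {{ compiler("c") }}
    - sysroot_linux-64 2.17  # [linux64]
```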
Whether this gets added by sysroot's run_exports or not, should packages that appear to have this requirement add the constraint themselves? I guess the constraint won't have the desired effect when the package is not in the build environment.

How is a package meant to communicate that "to link against me, you must build with c_stdlib >= X"? That appears to be the case in practice for openmpi. Or are all recipes that encounter missing glibc symbols when linking openmpi doing something wrong?
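(One hedged sketch of how a downstream recipe can express "build me with c_stdlib >= X" today, assuming conda-forge's stdlib jinja; the version is illustrative.)

```yaml
# recipe/conda_build_config.yaml
c_stdlib_version:   # [linux]
  - "2.28"          # [linux]
```

```yaml
# recipe/meta.yaml
requirements:
  build:
    - {{ compiler("c") }}
    - {{ stdlib("c") }}
```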
I'm not sure this is the best issue to discuss this, but is there a way for the "weighed down" c_stdlib_version to affect conda-forge infrastructure / conda-build, but not runtime installs? It is a bit surprising that installing a compiler outside conda-build pulls in the oldest sysroot.
Getting rid of the sysroot hack is indeed one of the goals of the whole stdlib effort. I'm not directly involved, but I think it'll come after the switch to 2.17.
Nice. I think that will solve the runtime compilation issues, so no other action on that front is probably required. |
Yeah, to be clear, once we remove the hacks @h-vetinari is referring to, conda will install the latest sysroot when a compiler is installed. That will be whatever the latest is at the time, which right now is 2.28 IIRC. It won't mean we ship the compiler without the sysroot.
Yeah, that's exactly what I would hope to happen. Looking forward to it! |
I'm trying to follow what's happened this summer, and it seems the transition to 2.17 has happened, but the weighing down to the oldest supported sysroot is still happening. I see comments about 'breaking non-conda-build' in conda-forge/conda-forge-pinning-feedstock#6070, but that's exactly the situation I want to fix, because weighing down outside conda-build is what causes all the breakages in my experience. Is there a separate issue to track the removal of track_features on sysroot, or should I open one? Using 2.17 by default in conda-forge builds is fine, but it really seems like the latest sysroot should be the default outside conda-build.
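(For context, a hypothetical sketch of the track_features mechanism referred to here: newer sysroot builds carry a feature that the solver penalizes, so the oldest sysroot wins by default; the feature name is made up.)

```yaml
build:
  track_features:
    - sysroot_228  # the solver deprioritizes builds carrying track_features, so this sysroot is avoided unless requested
```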
Right. I am confused about multiple issues and how they relate, so it has been tough for me to follow. AFAIK we have ~3 things going on
These are all somewhat related at a technical level, since implementations of some affect the others. However, I think there are also some policy / convention issues in the mix, especially for the last item. Thoughts?
Yeah. I was working on that in conda-forge/conda-forge.github.io#2206. So far, every problem I've encountered there has been caused by the weighed-down (oldest) sysroot, and either installing a later sysroot or fixing ignored or missing $LDFLAGS (e.g. depending on gcc instead of c-compiler) fixed it.

I think 2 and 3 are related, because going forward (IIUC) weighing down should only affect packages that don't use c_stdlib pinning (i.e. not conda-forge packages). I'd argue that the latest sysroot is the right default choice for user runtime envs, and other build environments that want to pin down for portability should follow conda-forge's example and pin down c_stdlib explicitly. To me, that suggests that weighing down should go away if/when c_stdlib is a reasonable expectation for conda-forge packages, and the latest available sysroot should become the default install.
This piece of infrastructure largely precedes my involvement, but the understanding I had gotten was that we'd obviously like to get rid of the sysroot hack. When this was brought up in the PR you mention, Isuru pointed out some issues, though I believe they can be resolved (there hasn't been an answer to my proposal there, so I'll summarize it below).
There's actually 4-5 things AFAIU:
That part is IMO the easiest.

(1b) The one problem with this is that …
If it's just for using the compilers, we can add the sysroot with our global baseline to the compiler packages.
I tend to agree with @minrk here that the newest sysroot suitable for a user's system would actually be a good default; anyone else can explicitly install a specific version.

The fourth point that Isuru brought up is that there are packages (like r-base) that may still need to pin an older sysroot. The feasibility of this approach depends a bit on how many packages would need such double variants (though note that it's not necessary to build twice).
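(If such double variants were ever needed, a minimal sketch would be two stdlib entries in the relevant conda_build_config.yaml, yielding one build per variant; the versions are illustrative.)

```yaml
c_stdlib_version:   # [linux]
  - "2.17"          # [linux]
  - "2.28"          # [linux]
```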
At this point I believe this would be a good thing to do...
Thanks. I don't really understand what's special about the r-base issue. I know R installations may compile things, but is this not the same "runtime compilation" situation we've already covered for mpicc, etc.? i.e. it would still be better for r-base to pull in latest sysroot by default for environments, and not the oldest sysroot. Or is that not what's happening? It is not true, as I understand it, that r-base needs to pin down sysroot except for building binaries to be redistributed. And I think at this point it is not the best experience to be preparing user environments for that task by default.
Okay, I'll do that and try to summarize/link where this has been spread out so far |
In conda-forge/libcxx-feedstock#131, I noticed that setting c_stdlib_version to 2.17 (and being in a cos7 image) still leads to failures if sysroot_linux-64 2.17 is not available at test time.

It seems reasonable to me that when a package indicates it requires c_stdlib_version >=x, we shouldn't allow pulling in older sysroots when building against that package?

conda-build has a similar feature like run-exports for also adding run_constrained: entries, which would do the job here IMO.

Thoughts @beckermr @isuruf?
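(A minimal sketch of what the proposed run_exports entry could look like in the sysroot recipe; note that conda-build really does spell the key strong_constrains. The surrounding details are illustrative, not the actual linux-sysroot feedstock.)

```yaml
build:
  run_exports:
    strong_constrains:
      # any package with this sysroot in its build environment would get a
      # run_constrained on at least this sysroot version downstream
      - sysroot_{{ target_platform }} >={{ version }}
```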