MPI: Experimental platform tag #4073
Conversation
L/LAMMPS/build_tarballs.jl
# Dependencies that must be installed before this package can be built
dependencies = [
    Dependency(PackageSpec(name="CompilerSupportLibraries_jll")),
    Dependency(PackageSpec(name="MPItrampoline_jll"), compat="2"),
    Dependency(PackageSpec(name="MicrosoftMPI_jll"))
    # Dependency(PackageSpec(name="MPIPlatformTag")),
I am debating whether this should be MPI.jl or a tiny single-preference package.
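For a tiny single-preference package, the selection hook could look roughly like the sketch below. This assumes the platform-augmentation mechanism from JuliaPackaging/JLLWrappers.jl#35; the `"mpi"` tag key, the `JULIA_MPI_ABI` variable, and the default value are all placeholders, not a real API:

```julia
using Base.BinaryPlatforms  # Platform, HostPlatform, tag access

# Sketch only: record the chosen MPI backend as an extra platform tag,
# so that every MPI-aware JLL can shard its artifacts on it.
function augment_platform!(platform::Platform)
    # A real package would read a Preferences.jl setting here; we fall
    # back to an environment variable purely for illustration.
    abi = get(ENV, "JULIA_MPI_ABI", "mpitrampoline")
    platform["mpi"] = abi
    return platform
end

p = augment_platform!(HostPlatform())
# p now carries an "mpi" tag that artifact selection can match against
```

The point of keeping this in a tiny package rather than MPI.jl itself is that JLLs could depend on it without pulling in all of MPI.jl at artifact-selection time.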
Force-pushed from 62d9519 to c26ee64
How does this work if you have another _jll which depends on LAMMPS_jll: would it automatically augment the platform?
Each JLL chooses its own artifacts in isolation from the others. If LibFoo_jll depends on LAMMPS_jll, and LAMMPS_jll chooses an artifact based on a particular MPI backend, then, if LibFoo_jll's own artifacts are similarly sharded across MPI backends, it must perform the same selection steps as LAMMPS_jll does. This is intentional; we don't want artifact selection to be involved in resolution. If you need to solve for resolution constraints, you should have multiple packages. You should only use artifact tags when you have identical (or nearly identical) functionality that can be swapped out without changing any other package's compatibility bounds. As an analogy, if I instantiate a bunch of packages that have multiple artifacts split into micro-architecturally optimized builds (e.g. …).

There is a gray area here where you're coordinating the choice of backend/dialect across multiple JLLs. I think this is still okay, because we are essentially asserting that all MPI-supporting JLLs must obey a centralized MPI backend setting. This should work flawlessly, with the exception that if we have JLLs that haven't been updated to pay attention to MPI tags, we may get a mixture. That may be fixable through intelligent compat bounds, ensuring that only MPI-tag-aware JLLs are installed together.
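To make the isolation argument concrete, here's a toy sketch (not real JLL code; the function and package names are made up) of why two JLLs that derive their artifact choice from the same shared tag end up consistent even though each selects independently:

```julia
using Base.BinaryPlatforms  # Platform, HostPlatform

# Hypothetical shared hook: every MPI-aware JLL applies the same tag.
shared_augment!(p::Platform) = (p["mpi"] = "mpitrampoline"; p)

# Each JLL selects its artifact independently, but keys off the same tag,
# so LAMMPS_jll and LibFoo_jll can never disagree on the MPI backend.
select_artifact(jll_name, platform) = string(jll_name, "-", platform["mpi"])

p = shared_augment!(HostPlatform())
select_artifact("LAMMPS_jll", p)  # "LAMMPS_jll-mpitrampoline"
select_artifact("LibFoo_jll", p)  # "LibFoo_jll-mpitrampoline"
```

A JLL that skips the shared hook would fall back to an untagged artifact, which is exactly the "mixture" risk described above.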
Replaced by #4540
Experiment with JuliaPackaging/BinaryBuilder.jl#1128 and JuliaPackaging/JLLWrappers.jl#35