Packages hash should inherit their parent package hashes #2719
It does have some problems, but directly modifying the hash also introduces many problems.
Although the current hash logic does have some problems, making the hash depend on dependency hashes would introduce other problems. Maybe we need a better solution that avoids breaking current compatibility as much as possible. I'll consider improving it in a future release, but 2.7.1 won't handle it until there is a solid solution, because I'm about to lock 2.7.1, ready to release.
The ABI is not the only problem: since xmake compiles almost every dependency as static, we will basically have two versions of packages existing side by side. I had this issue in the past with two different versions of openssl colliding in two different libs, resulting in very bad crashes.
I know, which is why it should be done in a minor release instead of a patch release.
Indeed, this will be an issue, but I'd say it's already an issue: since libcurl (for example) updates don't recompile every other package, with static linking there are a lot of different libcurl versions in multiple prebuilt packages. Each package using static deps will use the version that was current when it was built and won't get updated later.
If it was possible to track the dependency versions used to build a package, maybe we could use the "old hash" when version and configs match. That's another big issue I didn't think about: configs.

```lua
add_requires("foo", { configs = { shared = true } })
add_requires("bar") -- is bar.foo static?
```

Here, bar.foo should be shared as well, unless bar forces the shared config to false. It should be the same for all configs; duplicate package entries should only occur when two dependencies are added with explicitly mismatching configs. I understand this is complicated, maybe something for xmake 3.0 (do you have anything planned for xmake 3, like breaking changes in the interface?)
No, the two top-level packages are not related, even if they have dependencies. bar.foo is still static; you should only use …
Sure, but then we will have the same package with multiple configs/versions; this could be avoided automatically most of the time (and would prevent some issues that can arise from it).
But 99% of libcurl version updates don't break the ABI, even without recompiling other packages. So 99% of the time, other packages can still use their previous precompiled artifacts. However, if the hash is modified, then 99% of libcurl version updates will invalidate the precompiled artifacts of other packages. That is, in order to avoid compatibility issues in the 1% of cases, users would need to recompile everything in the other 99%. This is not a solution that I can accept.
ABI is really not the only problem. Big libraries like libcurl get updated for security fixes regularly, and due to the way xmake works now, the only way to be sure to receive that security fix in every libcurl copy used by the project is to uninstall all packages (not just libcurl, but all packages using it, and even packages using packages that use libcurl) and recompile all of them (disabling precompiled packages, because some of them would carry the previous libcurl code), all of this because of static linking.
Not really to fail, but to require a recompilation; this is the way Cargo works (since it relies a lot on static linking too), even though of course Rust and C++ are different beasts. It's very common to have to rebuild all dependencies (crates) with Cargo, and I don't think it's a big issue; even though Rust compiles faster than C++, there are also a lot more crates (packages) used by even a small application. Two things could be done, however, to minimize the issue:

1. An opt-in setting in the package definition itself, so individual packages can ask for their dependents to be rebuilt when they update.
2. A global policy that users can enable to make package hashes depend on their dependencies' hashes.
This way, most users won't be affected; even though I think it should become the default in the future, for now this would be a way to fix it. Solution 1 would be great for my own packages and probably some others on xmake-repo, which is why, even if you add a policy (solution 2), solution 1 should co-exist for now.
I'm not just talking about the ABI, but about the probability of problems after sub-packages are updated. For libraries like zlib and libcurl, even if the version is updated, there will be no problems in 99% of cases, and there is no need to reinstall all packages. If just updating the zlib version invalidates big libraries such as boost and grpc, forcing them to be recompiled without being able to use the precompiled artifacts, many users will find that unacceptable.
This doesn't solve the problem; there is absolutely no need to set it if the new libcurl update doesn't affect anything.
Maybe I will add a policy or other configuration methods to let users control the hash rules of their own project packages, or I may add some options to the package definition to let xmake know that a new version may break compatibility with other packages. But I will think carefully about how to do it when I improve it; right now I don't have any better ideas.
libcurl is a bad example here, I agree. On my own xmake-repo (https://github.com/NazaraEngine/xmake-repo) I have three libraries working together (nazarautils, nzsl and nazaraengine); an update in nazarautils can fix a bug in nazaraengine, and the same goes for nzsl. With something like … I understand that with C it doesn't matter a lot, but with C++, where you have templates, a very unstable ABI and a lot of inline functions, it becomes interesting to have such an option.
If many packages depend on nzsl, then we would have to modify a lot of packages. If this bug affects other packages, it should be configured in the nzsl package to tell all parent packages to recompile, for example:

```lua
add_versions("1.0", "xxxxx", {compatibility = false})
```
Indeed, it's a good idea that nazarautils "notifies" nzsl/nazaraengine that an update requires a recompilation. However, this can get very verbose if it happens at every update (which would be the case with my libs).
If each version breaks compatibility, you can set a global policy for this package, like this:

```lua
set_policy("package.version_compatibility", false)
```
I've tried this, and we can't get the buildhash of the dependencies into the buildhash of the current package. This is because the buildhash is already heavily used while packages are being loaded, but at that point their dependencies have not yet been loaded, so their buildhash cannot be retrieved. We can't even use package:installdir() in on_load anymore, because it also uses the buildhash, but in on_load we haven't started loading deps yet. It doesn't seem as easy to achieve as I thought it would be.
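To make the ordering problem concrete, here is a runnable toy model (a sketch only; none of these names or functions are xmake internals):

```lua
-- Toy model of the load-order problem described above: the buildhash
-- is needed while a package loads, before its deps are resolved.
local packages = {} -- name -> already-loaded package definition

local function buildhash(pkg)
    local key = pkg.name .. "/" .. pkg.version
    for _, depname in ipairs(pkg.depnames or {}) do
        local dep = packages[depname]
        -- If the dependency hasn't been loaded yet, its hash can't
        -- be folded in, which is exactly the on_load situation.
        assert(dep, "dependency not loaded yet: " .. depname)
        key = key .. "/" .. buildhash(dep)
    end
    return key -- a real implementation would digest this string
end

local function load_package(def)
    -- on_load would run here and may call installdir(), which needs
    -- the buildhash before the dependencies are resolved.
    local ok, err = pcall(buildhash, def)
    print(def.name, ok and "hash ok" or "hash failed: " .. tostring(err))
    packages[def.name] = def
end

load_package({ name = "zlib", version = "1.2.12" })
load_package({ name = "libpng", version = "1.6.37", depnames = { "zlib" } })      -- ok: zlib loaded first
load_package({ name = "libcurl", version = "7.84.0", depnames = { "openssl" } })  -- fails: openssl not loaded yet
```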
If the hash can't be used directly, maybe the parent config (and version) can be fetched instead?
How would we do that?
I took a look, and yeah, it seems a bit complicated; package dependencies need to be resolved before the package key gets computed.
Maybe we don't need to change the buildhash; we could just disable fetching and let the user go back to compiling and installing it.
I don't quite understand the solution here; apart from uninstalling the package, how can the user handle this?
I have improved it, you can try it: #2781

We can use:

```lua
package("libpng")
    add_deps("zlib")
    set_policy("package.librarydeps.strict_compatibility", true)
```

Or we can also use:

```lua
set_policy("package.librarydeps.strict_compatibility", true)
add_requires("libpng")
```

You need to reinstall the package first. For example, I updated zlib 1.2.11 => 1.2.12 in package().

**Dependency compatibility is not strictly tested by default**

```console
$ xmake f -c
```

This does nothing.

**Enable strict compatibility for librarydeps**

```console
$ xmake f -c --policies=package.librarydeps.strict_compatibility
note: install or modify (m) these packages (pass -y to skip confirm)?
in local-repo:
  -> libpng v1.6.37 [deps:*zlib]
please input: y (y/n/m)
```
Would it be possible to also add it as a dependency parameter in a package?

```lua
package("nzsl")
    add_deps("nazarautils", { strict_compatibility = true })
```

This would make it possible to recompile a package only when some of its dependencies are updated, instead of all of them.
Recompile nzsl, or nazarautils? If you recompile nazarautils, it should already achieve this.
I don't think you understood.

```lua
package("nazarautils")
    ...

package("nzsl")
    add_deps("nazarautils", "fmt")
    add_deps("frozen", "ordered_map", { private = true })
```

I'd like nzsl to require a recompilation only if nazarautils gets updated, not fmt/frozen/ordered_map.
The configuration parameters for … In addition, it is more complex to implement.
I understand; maybe that's an addition for the future with another API, then. It's already great to have it working with the new policy.
I've added a new policy.

Limit this package and all of its deps:

```lua
package("nzsl")
    set_policy("package.librarydeps.strict_compatibility", true)
```

Limit all of its child packages and this package:

```lua
package("nazarautils")
    set_policy("package.strict_compatibility", true)
```
Yes, but some header-only libraries could enable …
Try this: #2792
Looks like it's working fine! 👍
I think I noticed a small issue: deps bypass on_fetch. This occurred on a project using my engine (nazaraengine), which is found using on_fetch; updating nzsl and nazarautils broke the on_fetch, forcing me to install the engine.
But I don't know how to reproduce it; maybe you can debug it here.
I managed to track it down, but I'm not sure it's a bug: since the package policy says "strict compatibility", the only way to ensure that is to force an install (even if the package was found using on_fetch). However, maybe on_fetch could do something to opt out of strict compatibility. I made a simple proposal, with something like:

```lua
on_fetch(function (package)
    local nazaradir = os.getenv("NAZARA_ENGINE_PATH")
    if not nazaradir or not os.isdir(nazaradir) then
        return
    end

    local includedirs = path.join(nazaradir, "include")
    local libprefix = package:debug() and "debug" or "releasedbg"
    local linkdirs = path.join(nazaradir, "bin/" .. package:plat() .. "_" .. package:arch() .. "_" .. libprefix)

    local config = build_config(package)
    return table.join2({
        includedirs = includedirs,
        linkdirs = linkdirs,
        check_compatibility = false -- here
    }, config)
end)
```

However, I'm not sure this is the best way to implement it, especially since calling :fetch() may try to fetch the package again if it didn't find it the first time.
This does not seem to be a bug; if its dependency is updated, it should be strictly reinstalled.
Yes, it's not the best way to go, and I wouldn't consider it for now.
Indeed, it's not a bug, but it also means that on_fetch may become useless for such packages. It even leads to inconsistent behavior: the first time, the package will be found, and it's only when a dependency update occurs that xmake will ask to reinstall it. Maybe we should disable strict compatibility for packages found using on_fetch.
However, if it is disabled, updates to the static dependencies can also break compatibility. I think if a dependency library is configured for strict compatibility, then all libraries that depend on it should always be compiled and installed, rather than using the system libraries. |
Yeah, but then on_fetch has no purpose for those libraries. If we disable strict compatibility when a package is found using on_fetch, we leave the choice to the user (to use …)
However, many users who encounter a compatibility error do not know what is happening, nor do they know that they should disable the system library in order to fix it. So every time users encounter a problem like this, they might open an issue and ask the developer to troubleshoot and analyze it.
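For reference, a minimal sketch of that fix on the consumer side (`system = false` is a standard `add_requires` option; the package name is just the one from this thread):

```lua
-- force xmake to build and install the package itself instead of
-- using a copy found on the system (e.g. via on_fetch)
add_requires("nazaraengine", {system = false})
```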
I understand your point; however, this makes it impossible for me to work with my local version of my own library (for quick fixes and tests) through the xmake package system.
**Is your feature request related to a problem? Please describe.**
As of today, xmake handles packages by computing a hash for them; this hash is built from the package version, config options, platform, arch, etc.
This means that if a newer version of a package is available and a requires lock isn't used, xmake will propose to upgrade the package, which is great.
However, if package `bar` depends on package `foo`, and package `foo` gets updated, package `bar`'s key won't change, and this can lead to issues:

1. `foo` and `bar` aren't found on the computer: no problem, xmake will install `foo` and then install `bar` using `foo`.
2. `foo` receives an update (a security update, for instance).
3. xmake proposes to update `foo`, but not `bar`.
4. We end up with the new `foo` and the old `bar` (still carrying the previous `foo`) for `app`.
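As a concrete sketch of this scenario (a hypothetical xmake.lua; `add_requires` and `add_packages` are standard xmake, and the `foo`/`bar`/`app` names come from this issue):

```lua
-- app's xmake.lua: app consumes bar; bar's package definition
-- declares add_deps("foo"), so bar was built against some foo version
add_requires("bar")

target("app")
    set_kind("binary")
    add_files("src/*.cpp")
    -- if foo is updated later, bar's hash doesn't change, so this
    -- keeps linking the bar artifact built against the old foo
    add_packages("bar")
```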
This can be problematic for a lot of reasons:

- security fixes in `foo` never reach the copy statically linked into `bar`;
- two different versions of `foo` can end up colliding in the same application (as with the openssl crashes mentioned above);
- with C++ (templates, inline functions, an unstable ABI), mixing versions can break in subtle ways.
So until `bar` is updated, this will be an issue, and of course packages don't receive an update every time one of their dependencies is updated (especially for minor/security fixes).

**Describe the solution you'd like**
A simple solution would be for packages to inherit their parent packages' configs when building the hash, so that when a dependency's hash changes, the package hash changes too.
Of course, this means that nearly all package hashes will change when this update is applied, which was also the case with vs_runtime. I don't think it would be a real issue to do it, especially for xmake 2.7.1.
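To make the proposed hashing concrete, a small runnable sketch (hypothetical field names and a toy digest, not xmake's actual implementation):

```lua
-- Hypothetical illustration of a dependency-aware package key.
local function digest(s)
    -- stand-in for a real hash function
    return string.format("%08x", (#s * 2654435761) % 4294967296)
end

local function packagekey(pkg)
    -- today's inputs: version, configs, platform, arch
    local key = digest(table.concat({ pkg.version, pkg.configs, pkg.plat, pkg.arch }, "|"))
    -- proposed: fold each dependency's key in, so updating foo
    -- changes bar's key and invalidates bar's precompiled artifact
    for _, dep in ipairs(pkg.deps or {}) do
        key = digest(key .. packagekey(dep))
    end
    return key
end

local foo = { version = "1.0", configs = "shared=false", plat = "linux", arch = "x86_64" }
local bar = { version = "2.0", configs = "", plat = "linux", arch = "x86_64", deps = { foo } }
print(packagekey(bar)) -- changes whenever foo's key changes
```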
**Describe alternatives you've considered**
The only alternative I can think of is to freeze the package version to be the same on both sides, or to force-uninstall both packages when an update occurs.
**Additional context**
No response