Figure out upb-situation vs. grpc; do not remove deprecated C++-backend until then #12927
We do not have a short-term plan to remove the C++ backend. Likely that will be on the radar in a year or so, but we can probably figure out a better answer here first.
@veblush FYI
FWIW this is what we do with the Python wheel currently. The native extension links upb statically but uses a version script to hide all symbols except the entry point.
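As a rough illustration of that technique (all file and symbol names below are made up for the demo, not protobuf's real ones), a version script that exports only the module entry point might look like this:

```shell
# Stand-ins for a statically linked upb plus the Python entry point.
cat > upb_stub.c <<'EOF'
int upb_internal_helper(void) { return 1; }   /* should end up hidden   */
int PyInit__message_stub(void) { return 2; }  /* only exported symbol   */
EOF

cat > version_script.lds <<'EOF'
{
  global: PyInit_*;   /* keep only the Python module entry point(s) */
  local:  *;          /* hide everything else, including upb_* */
};
EOF

cc -shared -fPIC upb_stub.c -o stub.so -Wl,--version-script=version_script.lds

# Inspect the dynamic symbol table: the upb_* helper must not appear.
nm -D --defined-only stub.so
```

Because everything not matching `PyInit_*` is marked `local`, the extension stays self-contained and the linker is free to strip whatever the entry point does not reach.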
For the moment I think this is an elegant solution to the problem, as it makes the extension self-contained and allows maximum linker stripping. Python Protobuf and gRPC do not actually share any data at a C level, so there is no compatibility reason to have them share a upb shared library. Size-wise, upb is very small, so the duplication will have little effect on binary size.

At some point in the future we may want to allow Python <-> C(++) sharing of upb messages. At that point I think it would make sense to have a shared upb library. There are also some tricky questions around how sharing would work, particularly when it comes to schemas.

That said, I would also be open to using SO versioning to give every release of upb its own major version number.
Just saw that grpc switched to a submodule for upb instead of outright vendoring, which is definitely a step in the right direction! Not sure what other plans people have come up with in the meantime (i.e. if there will be a shared upb commit that grpc & protobuf point to); I would be happy to get an update if there's anything to share already.
We triage inactive PRs and issues in order to make it easier to find active work. If this issue should remain active or becomes active again, please add a comment. This issue is labeled
Not stale. |
gRPC and upb are moving forward in this direction, but it hasn't happened yet because upb is still working to add cmake support. Once upb has it, gRPC will treat it entirely as a regular dependency, like protobuf.
I just noticed #15671 through the announcement about removing
The most recent grpc 1.62 still vendors upb and ships a bunch of
Just to make sure I understand correctly: you'd recommend building a static upb as part of building the python bindings (but not shipping it)?
grpc 1.63 still vendors upb, and the C++ backend has been removed in protobuf. What's the situation here now?
My goal is that our source package, as published on PyPI, would do this automatically without any special intervention on your part. It looks like that is not currently the case. I just tried this and got:
We are currently exporting more than 600 symbols in the extension module. We should fix this by shipping our linker script version_script.lds in the source package and adding a command-line flag to our setup.py to reference it. If we do that, then a normal build of our source package should result in a module that does not expose any of the upb API, which should avoid conflicting with anything else. Would that be a satisfactory resolution for this issue?
Thanks for the quick response!
I understand that this is desirable for the general case on PyPI, but it is at odds with how we need to distribute things (we can control versions and ABI compatibility to a high degree, so if there's a lib providing the required symbols, we want to use that instead of doing a duplicated rebuild of those symbols in the python bindings). In the past, we've patched the builds of the protobuf python bindings to link directly against the upb libraries that grpc ships.
What I'd love is a semi-supported way to build the python bindings against existing libraries, i.e. not rebuild upb as part of the bindings.
There are two main obstacles to this.

The first is that the upb API is unstable. Unlike our user-level APIs in Python, Ruby, PHP, etc., the upb API is only meant for our internal use, and it breaks semi-frequently. Because it is not a user-level API, we don't track these breaking changes with SemVer, so there is no version number to help you determine how to keep Python and upb compatible. The Python source packages published by us take both Python/upb and upb from a single revision in our repo, and thus will always perfectly match. If you are trying to match a Python/upb and a upb that were distributed separately, there is no straightforward way of doing this.

The second is that the upb API has many optional parts.
These file names were created by gRPC's build system -- upb does not standardize these artifact names. Also, it looks like gRPC is not including some parts of upb that are used by Python/upb, such as upb/util/def_to_proto.h.
I think this is what I proposed in my previous message?
I get this. It would be much easier though if grpc used an unmodified submodule, rather than vendoring it + potentially layering changes on top. If it was a submodule, we could probably find an intersection of compatible upb commits.

Even better would be if grpc/protobuf agreed on the upb that they vendor, or if upb got a version (though I understand that this is not realistic for now).
Yeah, that would at least be workable. I'm not a huge fan though, not least because -- as you showed -- these symbols can start leaking relatively easily.
Just scrolled upthread a bit, and this would be a pretty good solution IMO, all things considered.
Unfortunately I made that comment before considering that we don't have a clear set of APIs to put into
@haberman, given that no other solution seems to be in play, could you perhaps open a PR that does what you suggested above:
I'm happy to test patches against our infrastructure.
Hi @h-vetinari, I created #17207. Give it a spin and see if it works for you. To build the Python source package as we distribute it, you can run this command:
|
Hey @haberman, sorry for the long delay; the forced switch to protobuf really did a number on us... I've picked the change from #17207 into conda-forge/protobuf-feedstock#215, which is where we finally unblocked the situation after much heartache. I haven't tested the visibility of the upb symbols yet, but I'm hopeful that this should work. 🤞 We'll find out soon, once we button up the remaining issues and start migrating our builds over to the newly-built protobuf/grpc. In any case, that PR doesn't look like it got merged; it should probably be revived?
I've refreshed it in #18467. If it helps your use case, I can submit the change. |
Which switch do you mean? I don't think I quite understand your setup; I had thought you were building your distribution from our source package. Has our PyPI source package changed in any major way lately? |
We're building from source, not the source package. That's partially because we support more architectures than you do (e.g. …). Before, we were building …

Consequently we ran into a bunch of issues (e.g. #12947), not limited to the fact that building protobuf with bazel creates a bootstrapping issue for us, as (our) bazel itself depends on grpc/libprotobuf. You can click through conda-forge/protobuf-feedstock#215 to get a sense of it.

The fundamental issue (IMO, from previous such discussions) is that the (third-party) binary distribution case is diametrically at odds with what many projects (and especially projects built by bazel) consider the primary installation channel for their users. It's (often) considered too hard to get right, so it's the last thing on anyone's mind, but such distribution is exactly our use-case, so we keep hitting these things (cf. bazelbuild/bazel#20947, grpc/grpc#33032, #13726).

We would love to build from an unpatched source if we can get to a point where our key requirements are met. I'm happy to contribute back bits and pieces as appropriate, but step 0 is to understand the constraints/requirements of the cross-platform, rolling, binary distribution (ideally based on shared libs) that conda-forge represents. I'm happy to explain more about that, e.g. how we ensure that ABI isn't broken and users get a compatible set of packages at all times.
Hi all,

I help package various things for conda-forge, including protobuf, grpc, abseil, etc. With the deprecation of the C++ backend for the protobuf python bindings, we're in a bit of a bind, because grpc will already install its vendored upb into $PREFIX/lib.

As far as I understand from discussions with @coryan: …

This is problematic because we now cannot change to the upb-backend for protobuf without putting upb on the path twice (even if one of them is in site-packages/lib), especially not if those two versions can be incompatible with each other (symbol clashes, segfaults, and other fun would ensue).

I don't have the full picture here, but from what I can tell, the options are:

- share the upb version between protobuf & grpc (unlikely?)
- version the upb shared library (e.g. libupb.so), so that different versions can coexist (which libprotobuf started doing to some degree for 4.x already)

Until that situation is solved, I implore you not to actually execute the deprecation of the C++ backend, because otherwise we'd be up a certain creek without a paddle.