Hi folks,

I've been working with grpc-gateway and stumbled upon the following behavior of Protocol Buffers.
Say we have a file `server.proto` that also imports a file `some/location/http.proto`. Then, in order to make a generated file such as `server_pb.py` work, we need to make sure that a Python module `some.location.http_pb.py` actually exists and is accessible. The same applies to most other languages, I guess.
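For concreteness, here is a rough sketch of what that looks like with protoc's stock Python plugin (which in reality emits `_pb2.py` modules; the paths below just mirror the placeholder names above):

```python
# Layout after something like: protoc -I. --python_out=out server.proto some/location/http.proto
# (sketch only; protoc's Python plugin emits *_pb2.py modules):
#
#   out/server_pb2.py              <- generated from server.proto
#   out/some/location/http_pb2.py  <- generated from some/location/http.proto
#
# server_pb2.py itself contains an absolute import of its dependency, roughly:
from some.location import http_pb2 as some_dot_location_dot_http__pb2
# so the package `some.location` (including __init__.py files, depending on your
# Python setup) has to be importable wherever server_pb2 is used.
```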
This can be a problem, for example if a package with the same name already exists, and it also makes automation a bit harder. My real-life case is described in a bit more detail at grpc-ecosystem/grpc-gateway#298.
An alternative could have been to walk the entire dependency tree, resolve it into a single file, and then compile that file into a single module. But my guess is that there is a deliberate idea behind the current behavior (more explicit dependency updates, perhaps?).
So, my question is: what's the common practice for importing `.proto` files from external sources and making sure their compiled counterparts are also accessible? In my example with Python, I could only come up with a workaround: grpc-ecosystem/grpc-gateway#298 (comment).
Sorry if my explanation is too vague; I can provide a more concrete description of the problem, with code examples, if needed.
Thanks!
I think the usual practice is to express the dependencies in your build system so that it can keep the generated code up to date, for example by rerunning protoc whenever a .proto file changes. If you prefer, you can also check in the generated code and have a script for updating it; this is a bit clunky but might be simpler depending on the situation.
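As a rough illustration of the "script for updating it" approach in Python, here is a minimal sketch using the `grpc_tools.protoc` entry point from the `grpcio-tools` package; the include path, output directory, and file names are assumptions based on the example above:

```python
# regen.py: regenerate the Python modules from the .proto files (sketch).
import os
from grpc_tools import protoc  # provided by the grpcio-tools package

PROTO_FILES = [
    "server.proto",
    "some/location/http.proto",  # compile the imported file too, so its module exists
]

os.makedirs("gen", exist_ok=True)  # protoc expects the output directory to exist

args = [
    "protoc",            # argv[0] placeholder expected by protoc.main
    "-I.",               # directory against which imports are resolved
    "--python_out=gen",  # generated *_pb2.py modules land under gen/
] + PROTO_FILES

if protoc.main(args) != 0:
    raise SystemExit("protoc failed")
```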
For depending on .proto files in external repos, this varies a bit with each language and its packaging mechanism, but usually the easiest way is to publish the generated code as part of the project artifact. That way, if you depend on a package that uses protos, the package comes with the generated code and you don't have to run protoc yourself. The one exception is C++, since it doesn't seem to have a dominant packaging mechanism, and for C++ we don't attempt to keep the generated code compatible across different versions of the runtime library.
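In Python terms, that could look like shipping the generated modules directly in the distribution, so consumers only need to install the package; a minimal sketch with hypothetical names:

```python
# setup.py (sketch): include the generated *_pb2.py modules in the artifact
# so that users of the package never have to run protoc themselves.
from setuptools import setup, find_packages

setup(
    name="myservice",                # hypothetical package name
    version="0.1.0",
    packages=find_packages(),        # picks up some/, some/location/, ... (each with __init__.py)
    py_modules=["server_pb2"],       # the top-level generated module
    install_requires=["protobuf"],   # runtime library the generated code imports
)
```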