AttributeError: 'NoneType' object has no attribute 'message_types_by_name' #10075
I saw this on one of my projects too. In my case, the common factor seems to be that the code imported this particular module (using importlib) twice. The first import worked, but the second died with that error. I was able to resolve the issue by caching the module after the first import. |
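A minimal sketch of that caching workaround, assuming a generated file like foo_pb2.py (the function name and its use of sys.modules are illustrative, not taken from the original comment):

```python
import importlib.util
import sys

def load_module_once(name, path):
    # Reuse the already-executed module instead of re-running its
    # module-level code. For a _pb2 file, that code registers
    # descriptors in a process-wide pool, which is what fails on
    # the second execution.
    if name in sys.modules:
        return sys.modules[name]
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    spec.loader.exec_module(module)
    return module
```

Calling `load_module_once("foo_pb2", "foo_pb2.py")` twice then returns the same module object rather than executing the file a second time.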
A Linux user reported another AttributeError with version 4.21.1, while with 3.20.1 it works.
|
Turns out this issue is also happening in a different project here at Relativity, one which does not use importlib and instead just uses a certain arrangement of import statements in a pytest suite. That one has no workaround, and we're having to pin protobuf to 3.x because of this issue. However, I was able to set up a minimal example of the issue using importlib. It reproduces reliably on Ubuntu 20 with protoc 3.21 and Python protobuf 4.21.1. First create foo.proto:
Then create test.py:

import importlib.util

for counter in range(0, 2):
    spec = importlib.util.spec_from_file_location("foo_pb2", "foo_pb2.py")
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)

Then compile and run it:
This gets:
|
We are running into a similar issue on Windows. We generated the Python files using protobuf 3.19.4; using them with Python protobuf package 4.21.1 results in the following errors:
|
In my case the problem was caused by adding a module directory to sys.path and using |
Upgrading protobuf fixed this issue for me |
The example I posted above is still broken with protobuf 4.21.2 |
I'm experiencing the same issue. I tried various pip protobuf packages and have not had any success using different package import resolution methods.
Could be because I'm still running Python 3.9 - there seems to be a soft suggestion that Python 3.9 has a specific protoc version linked to it. If anyone knows, let me know. There is a brew protobuf target for Python 3.9. |
An update on this since I ran into this issue while integrating the new protobuf version with gRPC. Since the symbol DB is a process-level singleton, the module-level code of a generated _pb2 module registers its descriptors process-wide when it runs. The issue in our repo was a latent bug in our test runner that imported each _pb2 multiple times under different names. By ensuring that each _pb2 module is imported only once per process, the error went away. |
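The single-import discipline described above can be sketched like this (a hedged illustration; `importlib.import_module` consults `sys.modules` first, so the generated module's registration code runs at most once per process):

```python
import importlib

def get_pb2(dotted_name):
    # import_module checks sys.modules before executing anything, so
    # repeated calls return the same cached module object; the _pb2
    # module-level descriptor registration runs only on the first call.
    return importlib.import_module(dotted_name)
```

For example, always importing via `get_pb2("myproto.foo_pb2")` (a hypothetical package path) avoids loading the same file under ad-hoc names.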
how to do this |
can you explain or provide the changed files |
Hey, we have run into a similar issue on Ubuntu 16.04 running Python 3.7, and it has been a problem for some time. I had no success using different packages or any other suggested solution.
Any suggestion would be appreciated. |
I have the same error, as seen below:

File ~\AppData\Roaming\Python\Python39\site-packages\gmsh.py:53
File C:\ProgramData\Anaconda3\lib\ctypes\__init__.py:364, in CDLL.__init__(self, name, mode, handle, use_errno, use_last_error, winmode)
TypeError: argument of type 'NoneType' is not iterable

Help me to overcome this problem. |
I was able to fix our issues by upgrading.
I wanted to understand what had happened, so I dug into the code. In my case, I was running on an M1 Mac. All of our builds were running perfectly fine under our x86-based infra, which was the first clue. Looking into the code on version 1.37.1:

protobuf/python/google/protobuf/descriptor.py Lines 60 to 74 in 909a0f3
protobuf/python/google/protobuf/descriptor.py Lines 966 to 985 in 909a0f3
Note that it attempts to load the extensions. If they aren't present (which they aren't by default under ARM64), you end up with a completely different metaclass for the descriptor. In our case, we were calling:

protobuf/python/google/protobuf/descriptor_pool.py Lines 204 to 216 in 909a0f3
|
As MattDietz mentioned, you need a return statement in the AddSerializedFile function in the descriptor_pool.py file. |
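The underlying idea of that fix - re-adding an identical serialized file should return the existing descriptor rather than None - can be sketched generically (this toy pool is illustrative and not protobuf's actual implementation):

```python
class ToyDescriptorPool:
    """Toy stand-in for a process-wide descriptor pool."""

    def __init__(self):
        self._files = {}

    def add_serialized_file(self, data: bytes):
        # Idempotent: a second add of identical bytes returns the
        # first descriptor object instead of returning None.
        if data in self._files:
            return self._files[data]
        desc = {"serialized": data}  # stand-in for a FileDescriptor
        self._files[data] = desc
        return desc
```

With this behavior, the double-import scenario discussed in this thread would receive the same descriptor twice, rather than a None that later raises AttributeError on attribute access.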
Yeah, this is absolutely an issue when hit in a Jupyter notebook. We use gRPC to communicate with services in our notebooks; the pb2 file is loaded twice and there's not much we can do about it. A temporary workaround is to downgrade your protobuf package.
I've confirmed that this fixes the issue. |
@haberman Can someone on the protobuf team please look into fixing this? While it might not be orthodox to load a single proto into the same process multiple times, it's clearly something that has seeped its way into many people's workflows. |
Fix is pending in: protocolbuffers/upb#804 |
The fix has just been submitted so imma close this. |
Which release on pip will have this solved? |
Doesn't seem to be fixed in 4.21.7. Have just tripped over this setting up a Windows 10 VM with
|
We are cutting a release shortly that will have the fix. This should land in the next week or so. |
@haberman I just upgraded to protobuf (4.21.8) with grpcio & tools (1.50.0) and am still getting the same error :-( |
I'm also still seeing it in 4.21.8 |
For people who are still seeing the error: are you using pure Python or the C acceleration (upb)? What is the output if you execute this?
My theory is that we fixed this for api implementation type |
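The implementation-type check being asked about can presumably be done like this (`api_implementation` is part of the installed protobuf package; the import is guarded here in case protobuf isn't available):

```python
try:
    from google.protobuf.internal import api_implementation
    # Reports which backend is active for the installed protobuf.
    backend = api_implementation.Type()
except ImportError:
    backend = "protobuf not installed"
print(backend)
```

On recent 4.x releases this typically prints `upb`, while pure Python reports `python`, which is exactly the distinction the question above is probing.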
The two python environments that I'm using the protobufs in both report upb. |
Same here - both my Python envs report upb. |
It looks like the fix was not included in the release due to a hiccup in the release process. Sorry about this -- we will be releasing again within the next week and will assure that the fix goes in. |
since this is not yet released, can you please keep this issue open? |
Sure, re-opening until the fix is released. |
Please don't forget this fix :) |
Just wanted to check back in here and see if we're still planning on releasing this fix soon - we're still waiting on this to be fixed so we can update our version and take advantage of the newer features. |
I'm hitting it too. Seems to be specific to newer Python 3 versions? |
This was fixed in 4.21.9. Here is a minimal repro; you can verify it no longer occurs in 4.21.9:

# test.py
from google.protobuf import descriptor_pool as _descriptor_pool

desc1 = _descriptor_pool.Default().AddSerializedFile(b'\n\ntest.proto\"\x03\n\x01M')
desc2 = _descriptor_pool.Default().AddSerializedFile(b'\n\ntest.proto\"\x03\n\x01M')

print(desc1)
print(desc2)

assert desc1 is not None
assert desc2 is not None
assert desc1 is desc2

Demo:
|
Co-authored-by: Dave La Chasse <[email protected]>
Co-authored-by: Ashraf Shaik <[email protected]>

Note: W&B partners with the Dagster team on this. If you are eager to use this integration for your own projects, please hold while we get this polished. Don't hesitate to provide feedback or ask questions :)

### Summary & Motivation

This PR contains the integration code with [Weights & Biases](https://docs.wandb.ai/), making it easy within Dagster to:

- use and create W&B [Artifacts](https://docs.wandb.ai/guides/artifacts)
- use and create Registered Models in the W&B [Model Registry](https://docs.wandb.ai/guides/models)
- run training jobs on dedicated compute using W&B [Launch](https://docs.wandb.ai/guides/launch)
- use the [wandb](https://github.com/wandb/wandb) client in ops and assets

You will find code for the integration and examples of how to use it. For more in-depth documentation, check the [current draft](https://docs.google.com/document/d/10qOyhbJKnJR4kazGqMF22db6UhXpeHop3_QV90eaiW4/edit).

**Useful references:**
- https://docs.wandb.ai/ref/python/artifact
- https://docs.wandb.ai/ref/python/run
- https://docs.wandb.ai/guides/launch

**Notes and questions for the Dagster team:**
- We're using local storage to store downloaded Artifacts. Does Dagster always run with a local filesystem that we can use? And what would happen when there is no more disk space?
- We linked to `https://docs.dagster.io/integrations/wandb` in `examples/with_wandb/README.md`. Can you confirm that's where the documentation would eventually live?
- I'm expecting your review to focus on form over content, but we're open to refactoring the code to better suit Dagster's coding style. Don't hesitate to be vocal if something could be improved.

### How I Tested These Changes

This integration has been extensively tested manually and through unit tests (included in the code).

**Known Issues**

tox currently fails due to a protobuf bug. There is an open GitHub [issue](protocolbuffers/protobuf#10075). I was able to remove the error by pinning another version, but I'm not sure of the implications. Probably better to wait for a fix before we release this.

---------

Co-authored-by: Dave La Chasse <[email protected]>
Co-authored-by: Ashraf Shaik <[email protected]>
Co-authored-by: yuhan <[email protected]>
Version: 4.21.1
Language: Python
Windows 10
protoc-21.1-win64
Using version 3.20.1 it works, but with 4.21.1 I get
AttributeError: 'NoneType' object has no attribute 'message_types_by_name'
https://github.com/oldnapalm/zwift-offline/blob/1b3e9d16e903b452d37a77675a38b402abb1e431/protobuf/per_session_info_pb2.py#L22
One difference between this message and others that work is that it contains only a repeated field:
https://github.com/oldnapalm/zwift-offline/blob/1b3e9d16e903b452d37a77675a38b402abb1e431/protobuf/per-session-info.proto#L9