Tracking issue for grpc-js invalid metadata errors #1173
Using:
I don't know the exact library; I have the following:
Full error (from the Stackdriver console):
Possibly related to googleapis/nodejs-pubsub#818.
Thank you for the report. That looks consistent with the other reports we have seen: like the others, the key is 4 apparently random characters and the value is a formatted timestamp.
We are experiencing the same issue: Failed to add metadata entry ����: Sat, 07 Dec 2019 14:56:38 GMT. Metadata key "����" contains illegal characters. For more information see #1173. Is there anything we can do to get the fix for this accelerated?
As mentioned in the original issue, please say what library you are using, and if possible what the domain name is of the server you are accessing. Other than that, the most helpful thing you or anyone else could provide is a full tcpdump of a session that triggers the error.
We are using @google-cloud/[email protected] and [email protected]. The issue is related to using the Firestore APIs. It does not say which server it is connected to when the error happens. node_modules/@google-cloud/firestore/node_modules/@grpc/grpc-js/build/src/metadata.js, line 31
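For context, the line referenced above sits in grpc-js's Metadata implementation, where header keys are checked against a whitelist regex. A simplified sketch of that kind of check (not the library's exact source, just the same idea):

```ts
// Simplified sketch of the kind of key validation grpc-js performs before
// accepting a metadata entry (not the library's exact source).
const LEGAL_KEY_REGEX = /^[0-9a-z_.-]+$/;

function isLegalKey(key: string): boolean {
  return LEGAL_KEY_REGEX.test(key);
}

// A corrupted key such as "\ufffd\ufffd\ufffd\ufffd" fails this test, so the
// entry is dropped and the warning is logged instead of crashing the request.
console.log(isLegalKey('date'));               // true
console.log(isLegalKey('\ufffd\ufffd\ufffd')); // false
```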
We are performing several million operations per day and this happens frequently across independent Firestore instances. So it's very likely that this is a software issue present on all your servers or in the client library. Here is a sample across two regions, with the originating region (where the request is made). There are two Firestore instances, one in us-central1 and one in europe-west1.
Also, the error is just logged to the console. The consuming Firestore layer does not properly subscribe to this error and propagate it. It therefore fails silently. We also re-read and check the data we wrote. So it's possible that this is an issue that does not affect the data being written or read, since we have not detected any corruption in the data and we check every record by reading it back after writing it.
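As an illustration of the write-then-verify flow described above, a minimal sketch assuming @google-cloud/firestore and hypothetical collection and document names:

```ts
import { Firestore } from '@google-cloud/firestore';

const db = new Firestore();

// Write a document, then read it back and compare it to what was written.
async function writeAndVerify(id: string, payload: Record<string, unknown>): Promise<void> {
  const ref = db.collection('records').doc(id); // hypothetical collection
  await ref.set(payload);

  const snapshot = await ref.get();
  const stored = snapshot.data();
  if (JSON.stringify(stored) !== JSON.stringify(payload)) {
    throw new Error(`Read-back mismatch for document ${id}`);
  }
}
```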
Also, within Firestore it uses @grpc/[email protected], so I'd assume it actually uses that implementation of gRPC, not the grpc package we use otherwise.
We won't be able to give you a tcpdump due to the volume of data and frequency of occurrence and other operational considerations. But if you have a beta version of firestore that has more info when the error happens then we can give that a try in our staging environment.
Thank you for the extra details. From that log image it looks like you are seeing this error probably 20-30 times out of the several million requests you make per day. Would you say that is accurate? I want to note that so far we believe this corruption only impacts headers, and we have only seen reports so far of it impacting that one header. Overall, your report is consistent with the other reports of this bug that we have seen, so unfortunately that does not help us significantly with figuring out what is wrong.
Same issue on several Node.js applications on GKE clusters, with @grpc/grpc-js "version": "0.6.14". I am using Google Pub/Sub ("@google-cloud/pubsub": "1.1.6") and Winston for logging on all the containers having the issue. If you have a debug version of the @grpc/grpc-js library that you'd like me to run to gather more information, I would be happy to do so.
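As a side note while waiting for a debug build, grpc-js already has built-in tracing controlled by the GRPC_VERBOSITY and GRPC_TRACE environment variables. A minimal sketch of enabling them (exporting them in the shell before starting Node is the more reliable route; the in-process form below is only for illustration):

```ts
// Enable grpc-js internal tracing before the library is loaded.
process.env.GRPC_VERBOSITY = 'DEBUG';
process.env.GRPC_TRACE = 'all';

// Load the client library only after the variables are set so that the
// bundled @grpc/grpc-js picks them up; trace output is written to stderr.
async function main(): Promise<void> {
  const { PubSub } = await import('@google-cloud/pubsub');
  const pubsub = new PubSub();
  const [topics] = await pubsub.getTopics(); // any call will now emit gRPC traces
  console.log(`Listed ${topics.length} topics`);
}

main().catch(console.error);
```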
Same issue showing up here
same issue happening over several million messages
@yatingoswami I just want to make sure I understand: is that the total number of times you observed that warning in several million requests?
We are using
We're using
using
together with (not sure which, if any, of these are relevant)
I think this started happening after we started using Firestore batch updates (Node.js v10 on Cloud Run).
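For reference, a minimal sketch of the kind of Firestore batch update mentioned, assuming @google-cloud/firestore with hypothetical collection and field names:

```ts
import { Firestore } from '@google-cloud/firestore';

const db = new Firestore();

// All writes in a batch are committed to Firestore in a single request.
async function batchUpdate(ids: string[]): Promise<void> {
  const batch = db.batch();
  for (const id of ids) {
    // Hypothetical collection and field; merge keeps existing fields intact.
    batch.set(db.collection('items').doc(id), { updatedAt: new Date() }, { merge: true });
  }
  await batch.commit();
}
```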
code is at github.com/icco/validator
We perform roughly 2-3K operations daily.
To anyone who has already commented here, can you please comment if you encountered this message while running your code somewhere other than a Google Cloud Platform server (GCE, GKE, Cloud Run, Cloud Functions, Firebase Functions, etc.)?
Encountered in local dev here. Accessing Firestore via Firebase Admin SDK as part of a data migration:
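As an illustration of that setup, a hypothetical firebase-admin sketch (default credentials, made-up collection and field names, not the commenter's actual script):

```ts
import * as admin from 'firebase-admin';

// Initialize with application default credentials (e.g. a service account
// pointed to by GOOGLE_APPLICATION_CREDENTIALS).
admin.initializeApp();
const db = admin.firestore();

// One-off migration pass over a hypothetical collection.
async function migrate(): Promise<void> {
  const snapshot = await db.collection('users').get();
  for (const doc of snapshot.docs) {
    await doc.ref.update({ migratedAt: admin.firestore.FieldValue.serverTimestamp() });
  }
}

migrate().catch(console.error);
```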
@nateq314 Thank you.
We have a library that connects to both Spanner and BigQuery; however, we see the following metadata error when making requests to Cloud Spanner. All of our environments are GKE-based, in different projects and regions. This error at the moment is specific to one region and GKE cluster.
The relevant gRPC library versions follow:
Eventually our process goes OOM as follows:
As mentioned, this is one environment out of 6.
@Crispy1975 Thank you for the report. How frequently are you seeing that metadata error? And would you mind saying which GKE region is experiencing those errors?
@murgatroid99 apologies for the delayed response. We are seeing the messages every 20 minutes in one environment and less often in another, both on the GCP region
Edit: I just double-checked some of our other envs and these also have the same errors, albeit less frequently. Other regions are:
We encountered this problem while moving from AWS NLB to AWS ALB with gRPC support and migrating from grpc-node (C++) to grpc-js in the server. We're using
I enabled the trace and logs in the server and for us, it seems to be failing the request every time it happens. Here's an example of a request from web service to data service.
Could it be something with AWS or something that should be addressed in the server/client?
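For context, a minimal sketch of the kind of grpc-js server involved in such a migration (hypothetical proto file, package, and method names):

```ts
import * as grpc from '@grpc/grpc-js';
import * as protoLoader from '@grpc/proto-loader';

// Hypothetical proto definition; real package and service names will differ.
const packageDefinition = protoLoader.loadSync('data_service.proto');
const proto = grpc.loadPackageDefinition(packageDefinition) as any;

const server = new grpc.Server();
server.addService(proto.data.DataService.service, {
  // Echo-style unary handler, purely illustrative.
  getRecord(call: grpc.ServerUnaryCall<any, any>, callback: grpc.sendUnaryData<any>) {
    callback(null, { id: call.request.id });
  },
});

server.bindAsync('0.0.0.0:50051', grpc.ServerCredentials.createInsecure(), (err) => {
  if (err) throw err;
  server.start();
});
```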
That's new; this is the first instance of that particular failure I've seen. We don't know why these errors happen. This issue is for information gathering, so that we can try to figure it out. It looks like that client is talking to your own server through the ALB. Is that correct? Can you trigger that error with any degree of consistency? If so, can you additionally set the environment variable
Yes, the server and the client are talking to each other through ALB. The problem is not consistent in the sense that it won't always happen with the same requests. However, it always happens somewhere when we run automation. We are using
Unfortunately, it doesn't look like those extra logs have any useful information, sorry. I do think your information is strong evidence that this is an issue in Node itself, as opposed to some other server or proxy, but I don't even know where to start looking for the root cause, and I don't know how to get enough relevant information to file a solid, fixable issue for the Node core.
We're running more tests to confirm, and we had to upgrade a bunch of dependencies, but it seems from the preliminary tests that it works with the newer Node version. Update:
AWS EC2. I have this issue on Node.js 10.16.3.
I recently upgraded Node to 14.17.0 and tried it.
Node: v14.17.5
Closing because this has not been reported in over two years. The root cause has likely been fixed.
Same issue, using "@google-cloud/pubsub": "^4.5.0". Metadata key "�w.`��y���:��{w_:6������my" contains illegal characters. For more information see...
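For anyone correlating these warnings with application-level behavior, a minimal @google-cloud/pubsub consumer sketch (hypothetical subscription name); note the metadata warning itself is logged internally by grpc-js and does not necessarily surface through these handlers:

```ts
import { PubSub, Message } from '@google-cloud/pubsub';

const pubsub = new PubSub();
const subscription = pubsub.subscription('my-subscription'); // hypothetical name

subscription.on('message', (message: Message) => {
  console.log(`Received ${message.id}`);
  message.ack();
});

subscription.on('error', (err: Error) => {
  // Stream-level errors surface here; the metadata warning above is only
  // written to the console by grpc-js and is not fatal to the stream.
  console.error('Subscription error:', err);
});
```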
This is a tracking issue for the following error message output by @grpc/grpc-js:

This is not a fatal error. Most applications should work as normal after seeing this. This error message indicates that a key/value pair has been omitted from the response metadata object as a result of some kind of data corruption.
This problem was originally reported at googleapis/nodejs-datastore#415. So far, all detailed reports have a timestamp as the value in the error message.
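To make the failure mode concrete, a small sketch using the public grpc-js Metadata class; the same validation that drops a corrupted incoming header also rejects illegal keys set by hand (the bad key below is just an example):

```ts
import { Metadata } from '@grpc/grpc-js';

const metadata = new Metadata();
// A key made of lowercase letters, digits, '_', '-' and '.' is accepted.
metadata.set('x-response-time', 'Sat, 07 Dec 2019 14:56:38 GMT');

try {
  // Keys containing other bytes are rejected; corrupted incoming headers that
  // fail the same check are dropped and the warning in this issue is logged.
  metadata.set('\u0000\ufffd', 'Sat, 07 Dec 2019 14:56:38 GMT');
} catch (err) {
  console.error((err as Error).message); // Metadata key "..." contains illegal characters
}
```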
If you see this message with timestamp metadata values when making a request to a Google server from a Google Cloud server using a Google Cloud API client library or Firebase, please add a +1 reaction to this comment to help us track how frequently it happens.
If you received this message using a different library or from accessing a different server, please share the following information to help us find the root cause of the error:
In addition, if you can get such a thing, it would be very helpful to have a tcpdump of an unencrypted TCP session that triggers this error.