In some cases Netherite users have seen exceptions like the following:
Microsoft.Azure.EventHubs.QuotaExceededException: Exceeded the maximum number of allowed receivers per partition in a consumer group which is 5.
There appear to be two issues that contribute to this; both should be investigated and fixed.
The client hash algorithm neither retries nor produces a useful error message when more than 5 clients land in the same bucket, which can happen in practice. Traces show that even with 128 hash buckets, as few as 256 clients can exhaust a bucket due to normal hash imbalance. We need to handle this case better; for example, we could change the client id and retry, as sketched below.
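A minimal sketch of the change-the-client-id-and-retry idea, in C#. The bucket count, hash function, and the `CreateReceiverForClientAsync` helper are hypothetical, not the actual Netherite client code; only the `Microsoft.Azure.EventHubs` types (`EventHubClient`, `PartitionReceiver`, `QuotaExceededException`) come from the real SDK, and depending on the SDK version the quota error may only surface once the receive link actually opens:

```csharp
// Sketch only: retry with a fresh client id when the hashed partition already
// has the maximum number of receivers. Bucket count, hash, and helper names
// are illustrative assumptions.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;

static class ClientReceiverPlacement
{
    const int NumBuckets = 128;   // hash buckets used to place client response receivers

    // Maps a client id to one of the response partitions (illustrative hash).
    static int BucketForClient(Guid clientId)
        => (int)((uint)clientId.GetHashCode() % NumBuckets);

    // Tries to open a receiver for a new client; if the bucket is exhausted
    // (QuotaExceededException), pick a new client id so it hashes elsewhere.
    public static async Task<(Guid clientId, PartitionReceiver receiver)> CreateReceiverForClientAsync(
        EventHubClient eventHubClient, string consumerGroup, int maxAttempts = 10)
    {
        for (int attempt = 1; ; attempt++)
        {
            Guid clientId = Guid.NewGuid();
            string partitionId = BucketForClient(clientId).ToString();
            PartitionReceiver receiver = eventHubClient.CreateReceiver(
                consumerGroup, partitionId, EventPosition.FromEnd());
            try
            {
                // Probe receive to force the AMQP link open, so a quota violation
                // surfaces here rather than later; the probe result is ignored in this sketch.
                await receiver.ReceiveAsync(1, TimeSpan.FromMilliseconds(10));
                return (clientId, receiver);
            }
            catch (QuotaExceededException) when (attempt < maxAttempts)
            {
                // Bucket already has 5 receivers: discard this receiver and
                // retry with a different client id, which hashes to another bucket.
                await receiver.CloseAsync();
            }
        }
    }
}
```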
EH clients are shut down late in the shutdown process, which sometimes appears to hang, so the clients are never released. As a result, the number of "live" clients can be much larger than the number of nodes in the system. We should (a) shut down clients sooner, and (b) investigate and fix the underlying causes of the hangs.
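One way to keep a hung close from holding receivers open indefinitely is to close the EH clients early in shutdown and bound each `CloseAsync` with a timeout. This is a sketch under that assumption, not the actual Netherite shutdown path; the helper name and timeout are illustrative:

```csharp
// Sketch only: close Event Hubs clients early in shutdown, bounding each
// CloseAsync with a timeout so one hung close cannot block the whole process.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;

static class ShutdownHelper
{
    public static async Task CloseClientsEarlyAsync(
        IEnumerable<EventHubClient> clients, TimeSpan perClientTimeout)
    {
        var closeTasks = clients.Select(async client =>
        {
            Task close = client.CloseAsync();
            var completed = await Task.WhenAny(close, Task.Delay(perClientTimeout));
            if (completed != close)
            {
                // Do not let a hung CloseAsync delay the rest of shutdown.
                Console.WriteLine("EventHubClient.CloseAsync did not complete within the timeout; continuing shutdown.");
            }
            else
            {
                await close; // observe any exception from a close that did complete
            }
        });

        await Task.WhenAll(closeTasks);
    }
}
```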