
Rollover correlation id. Addresses #344 #345

Closed
richchang0 wants to merge 1 commit

Conversation

richchang0

When the correlation id hits 2,147,483,648 (one over the int32 max), the driver can't encode that message and hits an error.

So in this change I've done two things:

  1. Changed the encoder to use an unsigned number instead of a signed one for the correlation_id.
  2. Removed itertools.count and replaced it with a plain Python integer, with every _next() call converting the number to a ctypes uint32 so an overflow rolls back over to 0 (see the sketch below).

Not sure whether this is a better design choice than checking the itertools.count value against the int max and resetting it.
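A minimal sketch of the rollover idea described above (the class and attribute names here are illustrative, not the actual kafka-python internals):

    import ctypes

    class CorrelationIdGenerator(object):
        def __init__(self):
            self._correlation_id = 0

        def _next(self):
            # ctypes.c_uint32 truncates to 32 bits, so incrementing past
            # 4294967295 (2**32 - 1) rolls the counter back over to 0
            self._correlation_id = ctypes.c_uint32(self._correlation_id + 1).value
            return self._correlation_id

    gen = CorrelationIdGenerator()
    gen._correlation_id = 2**32 - 1
    assert gen._next() == 0  # wraps instead of growing past uint32 range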

dpkp (Owner) commented Mar 13, 2015

Looks great -- can you add a test?

dpkp added this to the 0.9.4 Release milestone Mar 24, 2015
@@ -52,7 +52,7 @@ def _encode_message_header(cls, client_id, correlation_id, request_key):
         """
         Encode the common request envelope
         """
-        return struct.pack('>hhih%ds' % len(client_id),
+        return struct.pack('>hhIh%ds' % len(client_id),
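For context, this standalone demonstration (not code from the PR) shows why the signed 'i' format fails at the rollover value while the unsigned 'I' format accepts it:

    import struct

    struct.pack('>i', 2**31 - 1)  # ok: the signed int32 maximum, 2147483647
    struct.pack('>I', 2**31)      # ok: 2147483648 fits in an unsigned int32
    struct.pack('>i', 2**31)      # raises struct.error: out of range for 'i'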
dpkp (Owner) commented on this line:
The Kafka protocol defines this field as a signed int32. I think it's better to stick with that.
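A hedged sketch of the alternative dpkp points toward here: keep the signed '>i' encoding and wrap the counter before it exceeds the signed int32 maximum (illustrative only; PR #355 may implement this differently):

    # Keep the counter within the signed int32 range the protocol expects:
    def next_correlation_id(current):
        # wraps 2147483647 -> 0 instead of overflowing the '>i' struct format
        return (current + 1) % (2**31)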

dpkp (Owner) commented Mar 30, 2015

Take a look at PR #355.

dpkp (Owner) commented Mar 30, 2015

Merged #355 -- thanks for looking into this!

dpkp closed this Mar 30, 2015