Cap maximum shard size at the size of an integer #5141
Conversation
Supersedes dask#5134. Copying over the summary of that PR:

Works around the OpenSSL 1.0.2 bug demonstrated in issue dask#4538. Unlike PR dask#5115, which did this for reading, this does the same thing for writing. The error may be less likely to show up in the write path (as frames may simply be smaller than this limit), but it still seems like a good idea to protect against OverflowErrors from OpenSSL.
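For context, here is a minimal sketch of the sharding idea (names are illustrative, not the actual `distributed` code): split an outgoing frame so that no single buffer handed to the SSL layer exceeds the cap.

```python
# Illustrative sketch, not the distributed implementation: split an outgoing
# frame into shards of at most `max_shard_size` bytes, so no single write
# handed to OpenSSL can overflow a C int.
C_INT_MAX = 2**31 - 1  # OpenSSL 1.0.2 limits a single write to a C int


def shard_frame(frame, max_shard_size=C_INT_MAX):
    """Yield memoryview shards of ``frame``, each at most ``max_shard_size`` bytes."""
    view = memoryview(frame)
    for start in range(0, len(view), max_shard_size):
        yield view[start : start + max_shard_size]
```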
Guessing we will want to replace this as well (#5115)?
@@ -145,6 +145,8 @@ class TCP(Comm):
     An established communication based on an underlying Tornado IOStream.
     """

+    max_shard_size = dask.utils.parse_bytes(dask.config.get("distributed.comm.shard"))
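For reference, `dask.utils.parse_bytes` converts the human-readable size string from the config into a byte count. A quick illustration (results shown as comments):

```python
from dask.utils import parse_bytes

parse_bytes("64 MiB")  # 67108864 -- the default shard size discussed below
parse_bytes("2 GiB")   # 2147483648 -- one past the maximum value of a signed 32-bit int
```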
This seems to be 64 MiB. Do we want to limit it to that? Would it be worthwhile to have a different value for TCP specifically?
This PR doesn't change the behavior for TCP (this value is what is used by default for everything in `to_frames`). All it does is make sure that TLS has a possibly different value that is capped by the max int size, which I think was the intent of the other PR.
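A sketch of the intent described above (class bodies are abbreviated, and `C_INT_MAX` plus the literal `"64 MiB"` stand in for the real constant and config lookup):

```python
from dask.utils import parse_bytes

C_INT_MAX = 2**31 - 1  # largest buffer OpenSSL 1.0.2 accepts in one write


class TCP:
    # Default shard size for all comms; "64 MiB" stands in for the
    # distributed.comm.shard config value discussed above.
    max_shard_size = parse_bytes("64 MiB")


class TLS(TCP):
    # TLS inherits TCP's default but caps it at the size of a C int,
    # guarding against OverflowError inside OpenSSL.
    max_shard_size = min(C_INT_MAX, TCP.max_shard_size)
```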
Got it. So this was 64 MiB before?
Yup.
Co-authored-by: jakirkham <[email protected]>
Thanks @mrocklin @jakirkham
Passes `black distributed` / `flake8 distributed` / `isort distributed`