Add back compression and encryption #877
Conversation
S3 currently does not have proper bucket cleanup after testing, leading to `TooManyBuckets` errors once a certain number of tests have run. This adds the cleanup logic to every integration test. Furthermore, the Hadoop JDK installs were removed, as they should no longer be required.
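For context, per-test cleanup along these lines could be expressed as a pytest fixture; this is a minimal sketch using `boto3`, not the exact Skyplane code, and the bucket name is made up:

```python
import boto3
import pytest

@pytest.fixture
def temp_s3_bucket():
    # Create a throwaway bucket for a single integration test.
    s3 = boto3.resource("s3")
    bucket = s3.create_bucket(Bucket="skyplane-integration-test-tmp")  # illustrative name
    try:
        yield bucket
    finally:
        # Buckets must be emptied before they can be deleted; without this
        # step, leftover buckets eventually trigger TooManyBuckets.
        bucket.objects.all().delete()
        bucket.delete()
```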
Instead of storing the vCPU limits as a hardcoded variable inside Planner, this adds a new .csv file that contains that information; the CSV file is read when Planner is created. Test cases that use fake quota limits are included.
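A minimal sketch of the pattern, assuming a CSV with columns such as `cloud,region,vcpu_limit`; the column names, default path, and Planner interface here are illustrative rather than the actual skyplane code:

```python
import csv
from pathlib import Path

DEFAULT_QUOTA_FILE = Path("config/vcpu_quota.csv")  # hypothetical default location

class Planner:
    def __init__(self, quota_limits_file: Path = DEFAULT_QUOTA_FILE):
        # Read the per-region vCPU limits once, when the Planner is created.
        self.vcpu_limits = {}
        with open(quota_limits_file, newline="") as f:
            for row in csv.DictReader(f):
                self.vcpu_limits[(row["cloud"], row["region"])] = int(row["vcpu_limit"])

    def get_vcpu_limit(self, cloud: str, region: str) -> int:
        return self.vcpu_limits[(cloud, region)]
```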
Edited the AzureBlobInterface class:

* Wrote the logic for staging/uploading a block for a multipart upload in the `upload_object` method.
* Created two functions to 1) initiate the multipart upload and 2) complete the multipart upload.
* Since Azure works differently from S3 and GCS in that it doesn't provide a global upload ID for a destination object, the destination object name is used as the upload ID to stay consistent with the other object stores. This pseudo upload ID keeps track of which destination object each block and its block ID belong to in the CopyJob/SyncJob.
* Once all blocks have been uploaded/staged, all blocks for a destination object are committed together.

More things to consider about this implementation:

* Upload ID handling: Azure doesn't really have a concept equivalent to AWS's upload IDs. Instead, blobs are created immediately and blocks are associated with a blob via block IDs. The workaround of using the blob name as the upload ID should work, since `upload_id` is only used to distinguish between requests in the `finalize()` method.
* Block IDs: It's worth noting that Azure requires all block IDs for a blob to be of the same length. This is handled by formatting each ID as a block number zero-padded to 5 digits (the number of digits in Azure's maximum block count of 50,000) followed by the destination object key, so every ID has length `len("{5-digit block number}{destination_object_key}")`.

---------

Co-authored-by: Sarah Wooders <[email protected]>
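As a rough illustration of the stage-then-commit flow described above, here is a minimal sketch using the `azure-storage-blob` v12 SDK; the function, `dest_key`, and `chunk_iter` names are illustrative, not the actual `AzureBlobInterface` code:

```python
from azure.storage.blob import BlobServiceClient, BlobBlock

MAX_BLOCK_DIGITS = 5  # Azure allows up to 50,000 blocks per blob

def upload_blocks(conn_str: str, container: str, dest_key: str, chunk_iter):
    service = BlobServiceClient.from_connection_string(conn_str)
    blob = service.get_blob_client(container=container, blob=dest_key)

    block_ids = []
    for i, chunk in enumerate(chunk_iter):
        # Azure requires every block ID for a blob to have the same length,
        # so zero-pad the block index and suffix it with the destination key.
        block_id = f"{i:0{MAX_BLOCK_DIGITS}d}{dest_key}"
        blob.stage_block(block_id=block_id, data=chunk)
        block_ids.append(block_id)

    # "Complete the multipart upload": commit all staged blocks in order.
    blob.commit_block_list([BlobBlock(block_id=bid) for bid in block_ids])
```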
* Modified the tests so that they load from an actual quota file instead of a dictionary defined inline.
* Modified Planner so that it can accept a file name for the quota limits (defaulting to the skyplane config quota files).
* Added more tests for error conditions (no quota file is provided; a quota file is provided but the requested region is not included in it).

---------

Co-authored-by: Sarah Wooders <[email protected]>
Co-authored-by: Asim Biswal <[email protected]>
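The error-condition tests might look roughly like the following sketch, built around the hypothetical Planner interface sketched earlier in this thread rather than the actual skyplane test suite:

```python
import pytest
# Assumes the hypothetical Planner class from the earlier sketch is importable.

def test_missing_quota_file(tmp_path):
    # No quota file at the given path: Planner construction should fail.
    with pytest.raises(FileNotFoundError):
        Planner(quota_limits_file=tmp_path / "does_not_exist.csv")

def test_region_not_in_quota_file(tmp_path):
    # Quota file exists, but the requested region is absent from it.
    quota_file = tmp_path / "quota.csv"
    quota_file.write_text("cloud,region,vcpu_limit\naws,us-east-1,128\n")
    planner = Planner(quota_limits_file=quota_file)
    with pytest.raises(KeyError):
        planner.get_vcpu_limit("aws", "eu-west-3")
```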
So in the current implementation, my understanding is that each time data moves between VMs it is compressed and decompressed? This is fine for now, but maybe we should add an issue so that in the future the data is compressed/encrypted just once, when it is read from the object store.
Added a few minor nits but looks great otherwise! Feel free to merge once cleaned up.
@sarahwooders I don't think so? All of these are specified in the gateway program; data will only be decompressed or decrypted if you set it explicitly in the gateway programs of VMs located in destination regions.