0.3.1 release #882
Merged
Conversation
S3 tests currently do not clean up their buckets after running, leading to `TooManyBuckets` errors once enough tests have accumulated. This adds cleanup logic to every integration test. Furthermore, the Hadoop JDK installs were removed, as they should no longer be required.
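As context for what per-test cleanup can look like, here is a minimal sketch using boto3 and pytest. The fixture name, bucket name, and region are assumptions for illustration, not the actual Skyplane test code:

```python
import boto3
import pytest

@pytest.fixture
def s3_test_bucket():
    """Create a throwaway bucket for one integration test and delete it afterwards."""
    s3 = boto3.client("s3", region_name="us-east-1")
    bucket_name = "skyplane-integration-test-bucket"  # hypothetical name
    s3.create_bucket(Bucket=bucket_name)
    try:
        yield bucket_name
    finally:
        # Empty the bucket first; S3 refuses to delete non-empty buckets.
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket_name):
            for obj in page.get("Contents", []):
                s3.delete_object(Bucket=bucket_name, Key=obj["Key"])
        s3.delete_bucket(Bucket=bucket_name)
```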
Instead of storing the vCPU limits as a hardcoded variable inside Planner, this adds a new .csv file that contains that information. When a Planner is created, it reads this CSV file. Test cases with fake quota limits are included.
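A minimal sketch of how a planner might load such a quota CSV; the file path and the column names (`region`, `vcpu_limit`) are assumptions for illustration:

```python
import csv
from pathlib import Path

def load_vcpu_quotas(quota_file: Path) -> dict:
    """Read per-region vCPU limits from a CSV file into a {region: limit} dict."""
    quotas = {}
    with open(quota_file, newline="") as f:
        for row in csv.DictReader(f):
            quotas[row["region"]] = int(row["vcpu_limit"])
    return quotas

# e.g. quotas = load_vcpu_quotas(Path("quota_limits.csv"))  # hypothetical file name
```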
Edited the AzureBlobInterface class:
* Wrote the logic for staging/uploading a block for a multipart upload in the method upload_object.
* Created two functions to 1) initiate the multipart upload and 2) complete the multipart upload.
* Since Azure works differently than S3 and GCS in that it doesn't provide a global upload ID for a destination object, the destination object name is used as the upload ID to stay consistent with the other object stores. This pseudo upload ID keeps track of which destination object the blocks and their block IDs belong to in the CopyJob/SyncJob.
* Upon completion of uploading/staging all blocks, all blocks for a destination object are committed together.

More things to consider about this implementation:
* Upload ID handling: Azure doesn't really have a concept equivalent to AWS's upload IDs. Instead, blobs are created immediately and blocks are associated with a blob via block IDs. The workaround of using the blob name as the upload ID should work, since upload_id is only used to distinguish between requests in the finalize() method.
* Block IDs: Azure requires all block IDs within a blob to be of the same length. This is handled by formatting the IDs to be of length len("{number of digits in max blocks supported by Azure (50,000) = 5}{destination_object_key}"), i.e. a zero-padded 5-digit block number followed by the destination object key (see the sketch below).

---------

Co-authored-by: Sarah Wooders <[email protected]>
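For reference, a minimal sketch of the block-based upload pattern described above, using the azure-storage-blob v12 SDK. The helper names and the fixed-width block-ID scheme follow the description but are not the exact Skyplane code:

```python
from azure.storage.blob import BlobServiceClient, BlobBlock

service = BlobServiceClient.from_connection_string("<connection-string>")  # placeholder

def make_block_id(block_num: int, dest_key: str) -> str:
    # Azure requires every block ID within a blob to have the same length,
    # so zero-pad the block number to 5 digits (Azure allows at most 50,000 blocks).
    return f"{block_num:05d}{dest_key}"

def stage_block(container: str, dest_key: str, block_num: int, data: bytes) -> None:
    """Stage one block of a multipart upload; the blob is not visible yet."""
    blob = service.get_blob_client(container=container, blob=dest_key)
    blob.stage_block(block_id=make_block_id(block_num, dest_key), data=data)

def complete_multipart_upload(container: str, dest_key: str, num_blocks: int) -> None:
    """Commit all staged blocks in order; only now does the blob become visible."""
    blob = service.get_blob_client(container=container, blob=dest_key)
    block_list = [BlobBlock(block_id=make_block_id(i, dest_key)) for i in range(num_blocks)]
    blob.commit_block_list(block_list)
```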
* Modified the tests so that they load from an actual quota file instead of a dictionary defined in the test.
* Modified Planner so that it can accept a file name for the quota limits (defaulting to the Skyplane config quota files).
* Added more tests for error conditions: no quota file is provided, and a quota file is provided but the requested region is not included in it (see the sketch below).

---------

Co-authored-by: Sarah Wooders <[email protected]>
Co-authored-by: Asim Biswal <[email protected]>
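A minimal sketch of what tests for those error conditions might look like; the import path, the `Planner` constructor argument, the lookup method, and the raised exception types are all assumptions rather than the actual implementation:

```python
import pytest
from skyplane.planner.planner import Planner  # assumed import path

def test_missing_quota_file(tmp_path):
    # Planner should fail clearly when the quota file does not exist.
    with pytest.raises(FileNotFoundError):
        Planner(quota_file=tmp_path / "does_not_exist.csv")

def test_region_not_in_quota_file(tmp_path):
    quota_file = tmp_path / "quota.csv"
    quota_file.write_text("region,vcpu_limit\nus-east-1,32\n")
    planner = Planner(quota_file=quota_file)
    # Requesting a region absent from the quota file should raise an error.
    with pytest.raises(KeyError):
        planner.get_vcpu_limit("eu-west-3")  # hypothetical lookup method
```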