Change image upload chunk size from 512 * 3/4 KiB to 512 KiB
#1650
@luqmana found while working on oxidecomputer/omicron#3559 and #1469 that changing the chunk size to match what the CLI does makes it impossible to reproduce the image upload hang. Obviously we will want to figure out why it didn't work with the other chunk size (it should work with any), but in the meantime we can prevent the issue from happening.
This made me realize the old chunk size was based on a misunderstanding on my part of how the limit was being applied: I had it backwards. If base64 makes the data 1/3 bigger, and we are chopping up base64ed data, we can actually send a string of length `4/3 * MAX` (not `MAX * 3/4`), because that will decode to a byte array of length `MAX`. Here is the bit in crucible where the length of a `Vec<u8>` is checked against `MAX_CHUNK_SIZE`.

On top of that, I was doubly confused: I forgot that the `File.prototype.slice(start, end)` API takes byte offsets as start and end, even though we are able to read the contents of the slice more or less directly as a base64 string. So 512 KiB is in fact the correct max, matching the CLI and crucible.
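The size math above can be checked directly. This is a sketch, not code from this PR; `MAX` here stands in for crucible's 512 KiB `MAX_CHUNK_SIZE`, which is applied to the *decoded* byte array:

```typescript
const MAX = 512 * 1024; // 524288 bytes, the limit on the decoded Vec<u8>

// base64 output length for n input bytes (with `=` padding): 4 chars per 3-byte group
const base64Len = (n: number): number => Math.ceil(n / 3) * 4;

// The old chunk size of 3/4 * MAX raw bytes encodes to a string of exactly MAX
// characters — i.e. it was sized as if the limit applied to the base64 string.
console.log(base64Len((MAX * 3) / 4)); // 524288

// The correct chunk size of MAX raw bytes encodes to ~4/3 * MAX characters,
// which still decodes to exactly MAX bytes — the value crucible checks.
const encoded = Buffer.alloc(MAX).toString("base64");
console.log(encoded.length); // 699052, ~4/3 * MAX
console.log(Buffer.from(encoded, "base64").length); // 524288
```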
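For illustration, chunking with byte offsets looks roughly like this (a sketch, not the console's actual upload code; `CHUNK_SIZE` and `base64Chunks` are hypothetical names). `File.prototype.slice`, inherited from `Blob`, takes byte offsets, so each chunk is at most `CHUNK_SIZE` raw bytes, and the base64 string sent over the wire is about 4/3 as long:

```typescript
const CHUNK_SIZE = 512 * 1024; // raw bytes per chunk, matching MAX_CHUNK_SIZE

// Yield each CHUNK_SIZE-byte slice of the file as a base64 string.
async function* base64Chunks(file: Blob): AsyncGenerator<string> {
  for (let start = 0; start < file.size; start += CHUNK_SIZE) {
    // slice() clamps the end offset itself, so the last chunk may be shorter
    const chunk = file.slice(start, start + CHUNK_SIZE);
    const bytes = new Uint8Array(await chunk.arrayBuffer());
    yield Buffer.from(bytes).toString("base64");
  }
}
```

Slicing raw bytes first and encoding each slice independently also keeps every chunk on a 3-byte boundary, so each base64 string decodes on its own without padding issues.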