GH-38333: [C++][FS][Azure] Implement file writes #38780
Conversation
I think this is ready for review, except that it's currently based on top of #38773. Once that PR merges, the diff will be more readable.
OK. I'll merge #38773 right now. Please rebase on main.
(My review isn't completed yet. I'll continue later.)
// blocks created by other applications.
new_block_id = Azure::Core::Convert::Base64Encode(
    std::vector<uint8_t>(new_block_id.begin(), new_block_id.end()));
Do we need to check whether the `new_block_id` exists in `block_ids_` or not?
If we want to be 100% confident of avoiding clashes then yes, but personally I think the current solution is a good compromise.

The risk should be zero when using `OpenOutputStream`, because every block ID will be created by this same scheme, using monotonically increasing integers. The risk when using `OpenAppendStream` is that previously committed blocks used unusual names that might conflict. For example, if some other writer committed one block named `00002-arrow`, then that would conflict after this writer appends 2 additional blocks, and cause a corrupt blob. I think this is extremely unlikely, so personally I think this is a good option. Additionally, `OpenAppendStream` is not implemented at all for S3 and GCS, so presumably it's not used much.
OK. Could you describe the risk in a comment? If we find a real-world problem with it, we can revisit.
Done
if (metadata && metadata->size() != 0) {
  metadata_ = ArrowMetadataToAzureMetadata(metadata);
} else if (options.default_metadata && options.default_metadata->size() != 0) {
  metadata_ = ArrowMetadataToAzureMetadata(options.default_metadata);
}
Do these metadata replace the existing metadata? Should we merge with the existing metadata?
Closing/flushing an append stream will always completely replace the old metadata. This is covered by the `AzuriteFileSystemTest, TestWriteMetadata` test, which I largely copied from `gcsfs_test.cc`.
I don't feel strongly, but I think this is a reasonable choice. If it did merge, there would be no way to remove metadata keys through the Arrow filesystem. Also, replacing is simpler to implement than merging.
I see.
+1
After merging your PR, Conbench analyzed the 5 benchmarking runs that have been run so far on merge-commit c1b12ca. There were no benchmark performance regressions. 🎉 The full Conbench report has more details. It also includes information about 14 possible false positives for unstable benchmarks that are known to sometimes produce them.
### Rationale for this change

Writing files is an important part of the filesystem.

### What changes are included in this PR?

Implements `OpenOutputStream` and `OpenAppendStream` for Azure.

- Initially I started with the implementation from apache#12914, but I made quite a few changes:
  - Removed the different code path for hierarchical namespace accounts. There should not be any performance advantage to using special APIs only available on hierarchical namespace accounts.
  - Only implement `ObjectAppendStream`, not `ObjectOutputStream`. `OpenOutputStream` is implemented by truncating the existing file then returning an `ObjectAppendStream`.
  - More precise use of `try`/`catch`. Every call to Azure is wrapped in a `try`/`catch` and should return a descriptive error status.
  - Avoid unnecessary calls to Azure. For example, we now maintain the block list in memory and commit it only once on flush. apache#12914 committed the block list after each block that was staged and, on flush, queried Azure to get the list of uncommitted blocks. The new approach is consistent with the Azure fsspec implementation: https://github.com/fsspec/adlfs/blob/092685f102c5cd215550d10e8347e5bce0e2b93d/adlfs/spec.py#L2009
  - Adjust the block IDs slightly to minimise the risk of them conflicting with blocks written by other blob storage clients.
- Implement metadata writes. Includes adding default metadata to `AzureOptions`.
- Tests are based on `gcsfs_test.cc`, but I added a couple of extras.
- Handle the TODO(apacheGH-38780) comments for using the Azure fs to write data in tests.

### Are these changes tested?

Yes. Everything should be covered by azurite tests.

### Are there any user-facing changes?

Yes. The Azure filesystem now supports file writes.

* Closes: apache#38333

Lead-authored-by: Thomas Newton <[email protected]>
Co-authored-by: Sutou Kouhei <[email protected]>
Signed-off-by: Sutou Kouhei <[email protected]>