DynamoFS storage is quite expensive, while S3 is reasonably priced.
S3 allows parallel uploads, which can transfer data faster than DynamoFS's fastest option.
Also, storing large files (GBs) as 64 KB blocks is very inefficient.
This feature will store data in S3:
- One S3 object per file-system file, with the same name and path (in some bucket)
- Multiple sequential writes result in multiple parts of a multi-part upload (see the sketch below)
- The multi-part upload is completed when the file is closed after writing
- The file cannot be written to again after the initial open, due to the immutable (write-once) nature of S3
- While the file is being written to, its size and attributes are updated immediately, but the file is kept write-locked until it is closed (because it cannot be read from S3 until the multi-part upload finishes)
- After the write, reads can be executed as usual, any number of times
- Reads are blocked at the storage level via a spin-lock while the file is still being assembled from its parts (S3 eventual consistency)
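A minimal sketch of the write and read paths described above, assuming a Python implementation on top of boto3. The `S3File` class, the `dynamofs-data` bucket name, and the 60-second read timeout are illustrative assumptions, not the actual DynamoFS code; the S3 multipart-upload and `head_object`/`get_object` calls are standard boto3 APIs.

```python
# Illustrative sketch only: one S3 object per file, write-once via multipart upload.
import time

import boto3
from botocore.exceptions import ClientError

BUCKET = "dynamofs-data"          # assumed bucket name
MIN_PART = 5 * 1024 * 1024        # S3 requires parts >= 5 MiB (except the last one)

s3 = boto3.client("s3")


class S3File:
    """One S3 object per file-system file, written once through a multipart upload."""

    def __init__(self, key):
        self.key = key            # same name/path as the file-system file
        self.upload_id = None
        self.parts = []           # [{'ETag': ..., 'PartNumber': ...}]
        self.buffer = b""

    def open_for_write(self):
        # Starting the multipart upload acts as the write lock: the object
        # is not readable in S3 until the upload is completed on close().
        resp = s3.create_multipart_upload(Bucket=BUCKET, Key=self.key)
        self.upload_id = resp["UploadId"]

    def write(self, data):
        # Sequential writes are buffered and flushed as numbered parts.
        self.buffer += data
        if len(self.buffer) >= MIN_PART:
            self._flush_part()

    def _flush_part(self):
        part_number = len(self.parts) + 1
        resp = s3.upload_part(Bucket=BUCKET, Key=self.key,
                              UploadId=self.upload_id,
                              PartNumber=part_number, Body=self.buffer)
        self.parts.append({"ETag": resp["ETag"], "PartNumber": part_number})
        self.buffer = b""

    def close(self):
        # Closing the file completes the multipart upload; after this the
        # object is immutable (write-once) and becomes readable.
        if self.buffer:
            self._flush_part()
        s3.complete_multipart_upload(
            Bucket=BUCKET, Key=self.key, UploadId=self.upload_id,
            MultipartUpload={"Parts": self.parts})
        self.upload_id = None

    def read(self, offset, length, timeout=60.0):
        # Spin until S3 reports the assembled object (eventual consistency),
        # then serve a ranged GET for the requested block.
        deadline = time.time() + timeout
        while True:
            try:
                s3.head_object(Bucket=BUCKET, Key=self.key)
                break
            except ClientError as err:
                if err.response["Error"]["Code"] not in ("404", "NoSuchKey"):
                    raise
                if time.time() > deadline:
                    raise TimeoutError(f"{self.key} is not yet visible in S3")
                time.sleep(0.2)
        rng = f"bytes={offset}-{offset + length - 1}"
        return s3.get_object(Bucket=BUCKET, Key=self.key, Range=rng)["Body"].read()
```

In this sketch the spin-lock on reads is simply a polling loop on `head_object`; a real implementation would likely gate reads on the file's lock state in the metadata store instead of polling S3 directly.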